robust way possible! Diverse training opportunities and social benefits (e.g. UK pension scheme) What do you offer? Strong hands-on experience working with modern Big Data technologies such as Apache Spark, Trino, Apache Kafka, Apache Hadoop, Apache HBase, Apache NiFi, Apache Airflow, and OpenSearch. Proficiency in cloud-native technologies such as containerization and Kubernetes. Strong …
and Snowflake Marketplace. Familiarity with Snowpark for Python/Java-based transformations. Understanding of role-based access control, data masking, and time travel features. Databricks: Hands-on experience with Apache Spark and Databricks Runtime. Proficiency in Delta Lake for ACID-compliant data lakes. Experience with Structured Streaming and Auto Loader. Familiarity with MLflow, Feature Store, and Model Registry. Use …
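As an illustration of the Databricks skills this listing names, here is a minimal sketch of ingesting files with Auto Loader into a Delta table via Structured Streaming. The paths and file format are hypothetical placeholders, and the cloudFiles source only exists on Databricks.

```python
# Minimal sketch: incremental ingestion with Auto Loader into a Delta table.
# Intended for a Databricks runtime; all paths below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

stream = (
    spark.readStream.format("cloudFiles")          # Auto Loader source
    .option("cloudFiles.format", "json")           # format of incoming files
    .option("cloudFiles.schemaLocation", "/tmp/schemas/events")  # schema tracking
    .load("/mnt/raw/events")                       # landing zone to watch
)

(
    stream.writeStream.format("delta")             # ACID-compliant Delta sink
    .option("checkpointLocation", "/tmp/checkpoints/events")
    .outputMode("append")
    .start("/mnt/bronze/events")                   # target Delta table path
)
```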
Strong programming and integration skills: Java, JUnit, REST services, DQL, SQL, GWT. Proficiency in XML, XSD, xDB, XSL. Operating systems: Linux and Windows. Web and directory services: Apache HTTPD, Tomcat, Apache DS. DevOps tools: Git, Jenkins, SonarQube, Bitbucket, Octopus …
production issues. Optimize applications for performance and responsiveness. Stay Up to Date with Technology: Keep yourself and the team updated on the latest Python technologies, frameworks, and tools like Apache Spark, Databricks, Apache Pulsar, Apache Airflow, Temporal, and Apache Flink, sharing knowledge and suggesting improvements. Documentation: Contribute to clear and concise documentation for software, processes, and … Experience with cloud platforms like AWS, GCP, or Azure. DevOps Tools: Familiarity with containerization (Docker) and infrastructure automation tools like Terraform or Ansible. Real-time Data Streaming: Experience with Apache Pulsar or similar systems for real-time messaging and stream processing is a plus. Data Engineering: Experience with Apache Spark, Databricks, or similar big data platforms for processing … large datasets, building data pipelines, and machine learning workflows. Workflow Orchestration: Familiarity with tools like Apache Airflow or Temporal for managing workflows and scheduling jobs in distributed systems. Stream Processing: Experience with Apache Flink or other stream processing frameworks is a plus. Desired Skills: Asynchronous Programming: Familiarity with asynchronous programming tools like Celery or asyncio. Frontend Knowledge: Exposure …
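For the workflow orchestration experience mentioned above, a minimal Airflow DAG might look like the sketch below (Airflow 2.x style); the DAG id, schedule, and task bodies are invented placeholders, not a real pipeline.

```python
# Minimal sketch of an Airflow 2.x DAG; names and logic are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract() -> None:
    print("pull records from an upstream source")


def transform() -> None:
    print("clean and reshape the extracted records")


with DAG(
    dag_id="example_etl",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task   # run transform after extract
```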
Snowflake, Databricks). Strong DevOps mindset with experience in CI/CD pipelines, monitoring, and observability tools (Grafana or equivalent). Exposure to analytics, reporting, and BI tools such as Apache Superset, Lightdash, or OpenSearch. Willingness to work across the stack by contributing to API development and, at times, UI components (Vue.js, Zoho, or similar). Excellent communication and collaboration …
London, South East, England, United Kingdom Hybrid / WFH Options
Involved Solutions
customer data. Continuously improve existing systems, introducing new technologies and methodologies that enhance efficiency, scalability, and cost optimisation. Essential Skills for the Senior Data Engineer: Proficient with Databricks and Apache Spark, including performance tuning and advanced concepts such as Delta Lake and streaming. Strong programming skills in Python with experience in software engineering principles, version control, unit testing and …
data quality, or other areas directly relevant to data engineering responsibilities and tasks. Proven project experience developing and maintaining data warehouses in big data solutions (Snowflake). Expert knowledge of Apache technologies such as Kafka, Airflow, and Spark to build scalable and efficient data pipelines. Ability to design, build, and deploy data solutions that capture, explore, transform, and utilize data …
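A pipeline of the Kafka-plus-Spark kind described above often begins with Spark Structured Streaming reading a topic. The sketch below assumes a hypothetical broker and topic name, and requires the Spark Kafka connector package on the classpath.

```python
# Minimal sketch: consuming a Kafka topic with Spark Structured Streaming.
# Broker address and topic are hypothetical; needs the spark-sql-kafka package.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = (
    events.writeStream.format("console")  # stand-in sink for illustration
    .option("checkpointLocation", "/tmp/checkpoints/kafka-events")
    .start()
)
```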
City of London, London, United Kingdom Hybrid / WFH Options
Tata Consultancy Services
with AWS Cloud-native data platforms, including: AWS Glue, Lambda, Step Functions, Athena, Redshift, S3, and CloudWatch; AWS SDKs, Boto3, and serverless architecture patterns. Strong programming skills in Python and Apache Spark. Proven experience in Snowflake data engineering, including Snowflake SQL, Snowpipe, Streams & Tasks, and performance optimization; integration with AWS services and orchestration tools. Expertise in data integration patterns, ETL …
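To illustrate the Boto3 and orchestration experience this listing asks for, a minimal sketch might trigger a Glue job run and land a file in S3 where a Snowpipe auto-ingest stage is watching. The job name, bucket, and keys are all hypothetical.

```python
# Minimal sketch: triggering a Glue job and landing a file in S3 with Boto3.
# Job name, bucket, and key are hypothetical; a Snowpipe stage configured with
# auto-ingest would pick up the new S3 object via its event notification.
import boto3

glue = boto3.client("glue")
s3 = boto3.client("s3")

# Kick off a (hypothetical) Glue ETL job run.
run = glue.start_job_run(JobName="nightly-orders-etl")
print("started Glue run:", run["JobRunId"])

# Land a file where the (hypothetical) Snowpipe stage is watching.
s3.upload_file("orders.csv", "example-data-lake", "landing/orders/orders.csv")
```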
Bradford, Yorkshire and the Humber, United Kingdom
Fruition Group
in AWS. Strong expertise with AWS services, including Glue, Redshift, Data Catalog, and large-scale data storage solutions such as data lakes. Proficiency in ETL/ELT tools (e.g. Apache Spark, Airflow, dbt). Skilled in data processing languages such as Python, Java, and SQL. Strong knowledge of data warehousing, data lakes, and data lakehouse architectures. Excellent analytical and …
scalable environments for our data platforms. You will leverage cloud-native technologies and AWS tools such as AWS S3, EKS, Glue, Airflow, Trino, and Parquet, while preparing to adopt Apache Iceberg for even greater performance and flexibility. You'll tackle high-performance data workloads, ensuring seamless execution of massive queries, including 600+ billion-row queries in Redshift, by designing …
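Columnar layouts like Parquet underpin the query performance this listing describes; below is a minimal PyArrow sketch of writing a date-partitioned dataset, with illustrative column names and output path.

```python
# Minimal sketch: writing a partitioned Parquet dataset with PyArrow, the
# kind of columnar layout engines like Trino or Iceberg tables sit on.
# Columns and the output path are illustrative.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table(
    {
        "event_date": ["2024-01-01", "2024-01-01", "2024-01-02"],
        "user_id": [1, 2, 3],
        "amount": [9.99, 4.50, 12.00],
    }
)

# Partitioning on event_date lets query engines prune files at read time.
pq.write_to_dataset(table, root_path="events_parquet", partition_cols=["event_date"])
```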
of SQL and Python. You have strong hands-on experience of building scalable data pipelines in cloud-based environments using tools such as DBT, AWS Glue, AWS Lake Formation, Apache Spark, and Amazon Redshift. You have a good knowledge of data modelling, ELT design patterns, data governance, and security best practices. You're collaborative and pragmatic with great communication …
Luton, Bedfordshire, United Kingdom Hybrid / WFH Options
Data warehouse operations and tuning experience in schema evolution, indexing, and partitioning. Understanding of ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or any other distributed data programming frameworks (e.g. Flink, Hadoop, Beam). Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture. Experience with data quality and/or …
Luton, England, United Kingdom Hybrid / WFH Options
easyJet
indexing, partitioning. Hands-on IaC development experience with Terraform or CloudFormation. Understanding of ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or any other distributed data programming frameworks (e.g. Flink, Hadoop, Beam). Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture. Experience with data quality …
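As a small illustration of the partitioning experience these listings ask for, the sketch below writes a Spark DataFrame partitioned by date so downstream queries can prune files; the source path and column names are hypothetical.

```python
# Minimal sketch: partitioning a Spark DataFrame on write so queries that
# filter by date only touch the relevant directories. Paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.read.parquet("/mnt/bronze/bookings")  # hypothetical source table

(
    df.repartition("booking_date")               # co-locate rows per partition
    .write.mode("overwrite")
    .partitionBy("booking_date")                 # one directory per date value
    .parquet("/mnt/silver/bookings")
)
```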
systems. Nice to haves: Experience with NoSQL databases (MongoDB, Cassandra, Redis). Familiarity with message brokers (Kafka, SQS/SNS, RabbitMQ). Knowledge of real-time streaming (Kafka Streams, Apache Flink, etc.). Exposure to big-data or machine-learning frameworks (TensorFlow, PyTorch, Hugging Face, LangChain). Experience working with AI-driven development tools such as Cursor, Copilot, or …
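For the message-broker experience named above, a minimal consumer sketch using the kafka-python client is shown below; the topic, group id, and broker address are hypothetical.

```python
# Minimal sketch: a Kafka consumer using the kafka-python client.
# Topic, group id, and broker address are hypothetical placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                    # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="order-processors",                 # hypothetical consumer group
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    # Each message carries partition/offset metadata alongside the payload.
    print(message.topic, message.partition, message.offset, message.value)
```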
Linux-based, concurrent, high-throughput, low-latency software systems. Experience with pipeline orchestration frameworks (e.g. Airflow, Dagster). Experience with streaming platforms (e.g. Kafka), data lake platforms (e.g. Delta Lake, Apache Iceberg), and relational databases. Have a Bachelor's or advanced degree in Computer Science, Mathematics, Statistics, Physics, Engineering, or equivalent work experience. For more information about DRW's processing activities …
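As a sketch of the pipeline orchestration experience mentioned above, here is what a pair of Dagster software-defined assets might look like; the asset names and logic are invented for illustration.

```python
# Minimal sketch of orchestration with Dagster software-defined assets;
# asset names and payloads are illustrative, not a real pipeline.
from dagster import Definitions, asset


@asset
def raw_trades() -> list[dict]:
    # Stand-in for an extract step against some upstream source.
    return [{"symbol": "ABC", "qty": 100}]


@asset
def enriched_trades(raw_trades: list[dict]) -> list[dict]:
    # Dagster infers the dependency on raw_trades from the parameter name.
    return [{**trade, "venue": "demo"} for trade in raw_trades]


defs = Definitions(assets=[raw_trades, enriched_trades])
```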