robust way possible! Diverse training opportunities and social benefits (e.g. UK pension scheme) What do you offer? Strong hands-on experience working with modern Big Data technologies such as Apache Spark, Trino, Apache Kafka, Apache Hadoop, Apache HBase, Apache NiFi, Apache Airflow, OpenSearch Proficiency in cloud-native technologies such as containerization and Kubernetes Strong …
production issues. Optimize applications for performance and responsiveness. Stay Up to Date with Technology: Keep yourself and the team updated on the latest Python technologies, frameworks, and tools like Apache Spark, Databricks, Apache Pulsar, Apache Airflow, Temporal, and Apache Flink, sharing knowledge and suggesting improvements. Documentation: Contribute to clear and concise documentation for software, processes, and … Experience with cloud platforms like AWS, GCP, or Azure. DevOps Tools: Familiarity with containerization (Docker) and infrastructure automation tools like Terraform or Ansible. Real-time Data Streaming: Experience with Apache Pulsar or similar systems for real-time messaging and stream processing is a plus. Data Engineering: Experience with Apache Spark, Databricks, or similar big data platforms for processing large datasets, building data pipelines, and machine learning workflows. Workflow Orchestration: Familiarity with tools like Apache Airflow or Temporal for managing workflows and scheduling jobs in distributed systems. Stream Processing: Experience with Apache Flink or other stream processing frameworks is a plus. Desired Skills Asynchronous Programming: Familiarity with asynchronous programming tools like Celery or asyncio. Frontend Knowledge: Exposure …
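As a brief aside, the asyncio familiarity mentioned under Desired Skills above might look something like the following minimal sketch (standard library only; the task names and delays are hypothetical stand-ins for real I/O-bound calls):

```python
# Tiny asyncio sketch (standard library only); names and delays are hypothetical.
import asyncio


async def fetch(name: str, delay: float) -> str:
    # Simulate a non-blocking I/O call such as an HTTP request or DB query.
    await asyncio.sleep(delay)
    return f"{name} done"


async def main() -> None:
    # Run several I/O-bound tasks concurrently rather than sequentially.
    results = await asyncio.gather(fetch("orders", 0.2), fetch("users", 0.1))
    print(results)


if __name__ == "__main__":
    asyncio.run(main())
```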
City of London, London, United Kingdom Hybrid/Remote Options
Tata Consultancy Services
with AWS Cloud-native data platforms, including: AWS Glue, Lambda, Step Functions, Athena, Redshift, S3, CloudWatch AWS SDKs, Boto3, and serverless architecture patterns Strong programming skills in Python and Apache Spark Proven experience in Snowflake data engineering, including: Snowflake SQL, Snowpipe, Streams & Tasks, and performance optimization Integration with AWS services and orchestration tools Expertise in data integration patterns, ETL …
London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
customer data Continuously improve existing systems, introducing new technologies and methodologies that enhance efficiency, scalability, and cost optimisation Essential Skills for the Senior Data Engineer: Proficient with Databricks and Apache Spark, including performance tuning and advanced concepts such as Delta Lake and streaming Strong programming skills in Python with experience in software engineering principles, version control, unit testing and …
Azure. AWS experience is considered; however, Azure exposure is essential. Data Warehousing: Proven expertise with Snowflake – schema design, performance tuning, data ingestion, and security. Workflow Orchestration: Production experience with Apache Airflow (Prefect, Dagster or similar), including authoring DAGs, scheduling workloads and monitoring pipeline execution. Data Modeling: Strong skills in dbt, including writing modular SQL transformations, building data models, and …
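For illustration only, the DAG authoring described in the listing above might resemble this minimal sketch (assuming Apache Airflow 2.4+; the DAG id, schedule, and callables are hypothetical):

```python
# Minimal Airflow DAG sketch; assumes Airflow 2.4+ is installed.
# The DAG id, schedule and callables are hypothetical illustrations only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw records from a source system.
    return [{"id": 1, "value": 42}]


def load(ti):
    # Placeholder: read the upstream result via XCom and persist it.
    rows = ti.xcom_pull(task_ids="extract")
    print(f"loading {len(rows)} rows")


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task
```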
Luton, England, United Kingdom Hybrid/Remote Options
easyJet
indexing, partitioning. Hands-on IaC development experience with Terraform or CloudFormation. Understanding of ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or any other distributed data programming frameworks (e.g. Flink, Hadoop, Beam) Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture. Experience with data quality …
City of London, Greater London, UK Hybrid/Remote Options
ARC IT Recruitment
. AWS Platform Build: Demonstrable experience designing and building modern data platforms in AWS. ETL/Orchestration Expertise: Expertise in ETL/ELT design and data orchestration, specifically with Apache Airflow. SQL Mastery: Strong SQL skills with significant experience in query tuning and performance optimisation. Programming Proficiency: Proficiency in Python and Bash (for data processing, scripting, and automation).
London, South East, England, United Kingdom Hybrid/Remote Options
Arc IT Recruitment
. AWS Platform Build: Demonstrable experience designing and building modern data platforms in AWS. ETL/Orchestration Expertise: Expertise in ETL/ELT design and data orchestration, specifically with Apache Airflow. SQL Mastery: Strong SQL skills with significant experience in query tuning and performance optimisation. Programming Proficiency: Proficiency in Python and Bash (for data processing, scripting, and automation).
and cloud data platforms (AWS, Azure, or GCP). Proficiency in Python and SQL for data manipulation and transformation. Experience with ETL/ELT development and orchestration tools (e.g., Apache Airflow, dbt, Prefect). Knowledge of data modelling, data warehousing, and lakehouse architectures. Familiarity with DevOps practices, CI/CD pipelines, and infrastructure-as-code. Strong problem-solving skills …
Azure, or GCP, with hands-on experience in cloud-based data services. Proficiency in SQL and Python for data manipulation and transformation. Experience with modern data engineering tools, including Apache Spark, Kafka, and Airflow. Strong understanding of data modelling, schema design, and data warehousing concepts. Familiarity with data governance, privacy, and compliance frameworks (e.g., GDPR, ISO27001). Hands-on …
Luton, England, United Kingdom Hybrid/Remote Options
easyJet
field. Technical Skills Required • Hands-on software development experience with Python and experience with modern software development and release engineering practices (e.g. TDD, CI/CD). • Experience with Apache Spark or any other distributed data programming frameworks. • Comfortable writing efficient SQL and debugging on cloud warehouses like Databricks SQL or Snowflake. • Experience with cloud infrastructure like AWS or …
skills We're excited if you have 7+ years of experience delivering multi-tier, highly scalable, distributed web applications Experience working with distributed computing frameworks: Hive/Hadoop, Apache Spark, Kafka, Airflow Working with programming languages Python, Java, SQL. Working on building ETL (Extraction, Transformation and Loading) solutions using PySpark Experience in SQL/NoSQL database design Deep …
of SQL and Python You have strong hands-on experience of building scalable data pipelines in cloud-based environments using tools such as dbt, AWS Glue, AWS Lake Formation, Apache Spark and Amazon Redshift You have a good knowledge of data modelling, ELT design patterns, data governance and security best practices You're collaborative and pragmatic with great communication …
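As an illustration of the pipeline-building experience described above, a minimal PySpark transformation might look like the following sketch (the bucket, column names, and aggregation are hypothetical placeholders):

```python
# Minimal PySpark sketch; the input path, column names and output location
# are hypothetical placeholders, not part of any specific role.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_pipeline").getOrCreate()

# Read raw CSV data, derive a cleaned numeric column, and aggregate per customer.
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders.csv")

cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
)

summary = cleaned.groupBy("customer_id").agg(F.sum("amount").alias("total_spend"))

# Write the result as Parquet; a real pipeline might target Delta or Redshift instead.
summary.write.mode("overwrite").parquet("s3://example-bucket/curated/customer_spend/")
```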
warehousing, and big data (Hadoop, Spark). Proficient in Python, Java, or Scala with solid OOP and design pattern understanding. Expertise in ETL tools, DevOps and orchestration frameworks (Airflow, Apache NiFi). Hands-on experience with cloud platforms (AWS, Azure, or GCP) and their data services.
Greater Manchester, England, United Kingdom Hybrid/Remote Options
Searchability®
Development Opportunities Enhanced Maternity & Paternity Charity Volunteer Days Cycle to work scheme And More… DATA ENGINEER – ESSENTIAL SKILLS Proven experience building data pipelines using Databricks. Strong understanding of Apache Spark (PySpark or Scala) and Structured Streaming. Experience working with Kafka (MSK) and handling real-time data. Good knowledge of Delta Lake/Delta Live Tables and …
AWS (S3, Lambda, Glue, Redshift) and/or Azure (Data Lake, Synapse). Programming & Scripting: Proficiency in Python, SQL, PySpark etc. ETL/ELT & Streaming: Expertise in technologies like Apache Airflow, Glue, Kafka, Informatica, EventBridge etc. Industrial Data Integration: Familiarity with OT data schema originating from OSIsoft PI, SCADA, MES, and Historian systems. Information Modeling: Experience in defining semantic …
and Responsibilities While in this position, your duties may include, but are not limited to: Support the design, development, and maintenance of scalable data pipelines using tools such as Apache Airflow, dbt, or Azure Data Factory. Learn how to ingest, transform, and load data from a variety of sources, including APIs, databases, and flat files. Assist in the setup …
City of London, London, United Kingdom Hybrid/Remote Options
Solirius Reply
have framework experience within either Flask, Tornado or Django, Docker Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo Experience with big data technologies, such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality Preparing data for predictive and prescriptive modelling Hands-on coding experience, such as Python …
implement data processing environments using AWS services such as Glue, EMR, SageMaker, Redshift, Aurora and Snowflake Build data processing pipelines using Python, SQL, PySpark, Spark, Lambda, Step Functions and Apache Airflow Monitor performance, cost and security of the data platform Apply strong data governance and access control for sensitive data Collaborate with multiple teams to support the wider data …