Required: Bachelor's degree in Computer Science, Software Engineering, Data Science, or a closely related field. Advantageous: Certifications or substantial hands-on experience with modern data pipeline tools (e.g., Apache Airflow, Spark, Kafka, dbt, or similar). Desirable: Familiarity with financial services regulatory frameworks (e.g., MiFID II, GDPR, SOX) and best practices for data governance. Required Knowledge and … Engineering: Hands-on experience with Java (Spring Boot), React, and Python, covering backend, frontend, and data engineering. Data Engineering Tools: Proficient with modern data engineering and analytics platforms (e.g., Apache Airflow, Spark, Kafka, dbt, Snowflake, or similar). DevOps & Cloud: Experience with containerisation (Docker, Kubernetes), CI/CD pipelines, and cloud platforms (e.g., AWS, Azure, GCP) is highly desirable.
… at least one cloud data platform (e.g., AWS, Azure, Google Cloud) and big data technologies (e.g., Hadoop, Spark). · Strong knowledge of data workflow solutions like Azure Data Factory, Apache NiFi, Apache Airflow, etc. · Good knowledge of stream and batch processing solutions like Apache Flink and Apache Kafka. · Good knowledge of log management, monitoring, and …
London, South East England, United Kingdom Hybrid/Remote Options
Yapily
… Preferred Skills: Python: Knowledge for data automation and scripting. Containerization: Familiarity with tools like Docker and Kubernetes. Workflow/Orchestration Tools: Familiarity with tools such as Airflow, Dagster, or Prefect. Cloud-based Data Services: Exposure to cloud-based data services (GCP preferred; AWS/Azure also considered). Data Lineage & Metadata Management: Understanding of best practices.
London, South East, England, United Kingdom Hybrid/Remote Options
Adecco
… you the chance to work on cutting-edge solutions that make a real impact. Key Responsibilities: * Data Engineering: Design and implement data pipelines, lakes, and warehouses using tools like Spark, Airflow, or dbt. * API & Microservices Development: Build secure, efficient APIs and microservices for data integration. * Full Stack Development: Deliver responsive, high-performance web applications using React (essential), plus Angular or …
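(As an illustrative aside on the pipeline work this listing describes: below is a minimal Apache Airflow DAG sketch in Python. The DAG id, task names, and schedule are invented for the example and are not taken from the advert; it assumes Airflow 2.x.)

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder extract step: a real pipeline would pull from a source
        # system (database, API, object store). The return value is pushed
        # to XCom automatically.
        return [1, 2, 3]

    def transform(ti):
        # Pull the extract step's output from XCom and apply a trivial
        # transformation; in practice this might hand off to Spark or dbt.
        raw = ti.xcom_pull(task_ids="extract")
        return [row * 2 for row in raw]

    with DAG(
        dag_id="example_daily_pipeline",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        extract_task >> transform_task  # run transform after extract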
… trade-offs. Experience with multiple data formats and serialization systems (e.g., Arrow, Parquet, Protobuf/gRPC, Avro, Thrift, JSON). Experience managing data pipeline orchestration systems (e.g., Kubernetes, Argo Workflows, Airflow, Prefect, Dagster). Proven experience in managing the operational aspects of large data pipelines, such as backfilling datasets, rerunning batch jobs, and handling dead-letter queues. Prior experience triaging …
London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
Desirable Skills for the AWS Data Engineer: Experience with Databricks, Kafka, or Kinesis for real-time data streaming. Knowledge of containerisation (Docker, ECS) and modern orchestration tools such as Airflow. Familiarity with machine learning model deployment pipelines or data lakehouse architectures. Data Engineer, AWS Data Engineer
London, South East, England, United Kingdom Hybrid/Remote Options
Arc IT Recruitment
… AWS Platform Build: Demonstrable experience designing and building modern data platforms in AWS. ETL/Orchestration Expertise: Expertise in ETL/ELT design and data orchestration, specifically with Apache Airflow. SQL Mastery: Strong SQL skills with significant experience in query tuning and performance optimisation. Programming Proficiency: Proficiency in Python and Bash (for data processing, scripting, and automation).
London (Brentford), South East England, United Kingdom
NBCUniversal
… to lead and mentor a team of data engineers. Experience in using techniques such as infrastructure as code and CI/CD. Experience with graph-based data workflows using Apache Airflow. Programming skills in one or more of the following: Python, Java, Scala, R, and experience in writing reusable/efficient code to automate analysis and data processes. … Experience in processing large volumes of data using parallelism techniques/tooling, such as Apache Spark. Experience in processing structured and unstructured data into a form suitable for analysis and reporting, with integration with a variety of data metric providers ranging from advertising and web analytics to consumer devices. Experience in basic Machine Learning techniques is a big plus. Experience …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
… time and compute costs. Develop modular, reusable transformations using SQL and Python. Implement CI/CD pipelines and manage deployments via Git. Automate workflows using orchestration tools such as Airflow or dbt Cloud. Configure and optimise Snowflake warehouses for performance and cost efficiency. Required Skills & Experience: 7+ years in data engineering roles. 3+ years hands-on experience with Snowflake. …
… in-person collaboration is crucial at this early stage). You may be a great fit if you have experience with any of the following: workflow orchestration tooling (e.g., Prefect, Airflow, Dagster); cloud data warehouses (e.g., BigQuery, Snowflake, Redshift); data transformation tools (e.g., dbt) and data quality frameworks (e.g., Great Expectations); backend Python frameworks (e.g., Django, FastAPI, Flask …)
… and managing cloud infrastructure as code. Proficiency in programming languages such as Python, Spark, and SQL. Strong experience with SQL databases. Expertise in data pipeline and workflow management tools (e.g., Apache Airflow, ADF). Experience with cloud platforms (Azure preferred) and related data services. Excellent problem-solving skills and attention to detail. Inclusive and curious, continuously seeks to build knowledge …
London, South East, England, United Kingdom Hybrid/Remote Options
Robert Half
… ETL processes for improved performance, cost efficiency, and scalability. * Troubleshoot and resolve data pipeline issues swiftly and effectively across the data platform. * Work with orchestration tools such as Airflow, ADF, or Prefect to schedule and automate workflows. * Keep abreast of industry trends and emerging technologies in data engineering, and continuously improve your skills and knowledge. Profile: * Minimum …
… Engineering: Proficiency in SQL and relational databases (e.g., PostgreSQL, DuckDB). Experience with the modern data stack, building data ingestion pipelines and working with ETL and orchestration tools (e.g., Airflow, Luigi, Argo, dbt), big data technologies (Spark, Kafka, Parquet), and web frameworks for model serving (e.g., Flask or FastAPI). Data Science: Familiarity or experience with classical NLP techniques …
London, South East, England, United Kingdom Hybrid/Remote Options
Lorien
… data storytelling and operational insights. Optimise data workflows across cloud and on-prem environments, ensuring performance and reliability. Skills & Experience: Strong experience in ETL pipeline development using tools like Apache Airflow, Informatica, or similar. Advanced SQL skills and experience with large-scale relational and cloud-based databases. Hands-on experience with Tableau for data visualisation and dashboarding. Exposure …
London, South East, England, United Kingdom Hybrid/Remote Options
Searchability
… are giving express consent for us to process (subject to required skills) your application to our client in conjunction with this vacancy only. KEY SKILLS: GCP | Python | SQL | MongoDB | Airflow | dbt | Terraform | Docker | ETL | AI | Machine Learning
… Machine Learning: Experience supporting and understanding ML pipelines and models in a production setting. Direct experience with Google Cloud Platform, BigQuery, and associated tooling. Experience with workflow tools like Airflow or Kubeflow. Familiarity with dbt (Data Build Tool). Please send your CV for more information on these roles. Reasonable Adjustments: Respect and equality are core values to us. …
… and dimensional data modelling (SCDs, fact/dim, conformed dimensions). Experience with PostgreSQL optimisation. Advanced Python skills. ETL/ELT Pipelines: Hands-on experience building pipelines using SSIS, dbt, Airflow, or similar. Strong understanding of enterprise ETL frameworks, lineage, and data quality. Cloud & Infrastructure: Experience designing and supporting AWS-based analytical infrastructure. Skilled in working with S3 and integrating …
London, South East, England, United Kingdom Hybrid/Remote Options
Oscar Technology
… roles. Strong expertise in Azure cloud services and Databricks. Advanced proficiency in Python and SQL for data engineering. Experience with ETL/ELT tools and orchestration frameworks (e.g., Airflow, dbt). Strong knowledge of data warehousing and cloud-based data architectures. Understanding of BI tools such as Power BI or Tableau. Strong problem-solving and debugging skills. Ability …
… spatial datasets (MasterMap, AddressBase, Land Registry). Desired: Experience with WMS/WFS services, graph theory (NetworkX), GDAL, and Snowflake. Nice to have: CI/CD and orchestration tools (Airflow, Argo CD), Mapbox/MapLibre, Scala, Streamlit, DuckDB, and Power BI. What you need to do now: If you're interested in this role, click 'apply now' to forward …
… Desirable: Extensive experience in developing architectural strategies and blueprints for hybrid and cloud-native solutions. ELT/ETL Frameworks & Pipelines. Essential: Develop robust ELT/ETL pipelines using tools like Apache Airflow, DBT, AWS Glue, Azure Data Factory, or Kafka Connect. Desirable: Optimize data transformations for performance, reusability, and modular design (e.g., using SQL/Scala/Python).
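(Purely as a sketch of the "modular, reusable transformation" idea this listing mentions: the Python below composes small, independently testable steps with pandas. All function and column names are hypothetical and not taken from the advert.)

    import pandas as pd

    def clean_customers(df: pd.DataFrame) -> pd.DataFrame:
        # Normalise column names and drop exact duplicate rows.
        return df.rename(columns=str.lower).drop_duplicates()

    def add_full_name(df: pd.DataFrame) -> pd.DataFrame:
        # Derive a display field from the first/last name columns.
        return df.assign(full_name=df["first_name"] + " " + df["last_name"])

    def run_pipeline(df: pd.DataFrame) -> pd.DataFrame:
        # Compose the steps; each can be reused and unit-tested on its own.
        return df.pipe(clean_customers).pipe(add_full_name)

    if __name__ == "__main__":
        raw = pd.DataFrame(
            {"First_Name": ["Ada", "Ada"], "Last_Name": ["Lovelace", "Lovelace"]}
        )
        print(run_pipeline(raw))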
London, South East, England, United Kingdom Hybrid/Remote Options
Harnham - Data & Analytics Recruitment
… ingestion pipelines using Azure Data Factory (ADF) and Python. Ensure high-quality raw datasets to enable accurate analytics and data modeling. Deploy and maintain data tools on Kubernetes (Airflow, Superset, RStudio Connect). Support Data Analytics initiatives through DBT, DevOps, and deployment governance. You will work closely with a small, focused team, contributing directly to strategic data initiatives.
… Build CI/CD pipelines for ML models, agents, and GenAI applications. Deploy LLMs and orchestrate multi-agent systems (LangChain, LangGraph, or custom). Create and maintain data pipelines (Airflow, dbt, Step Functions). Implement monitoring, observability, and automated retraining (CloudWatch, Prometheus, Grafana, MLflow). Embed strong security, compliance, and guardrail practices across all ML workflows. Mentor junior engineers …
… warehousing and transformation. Strong SQL skills and understanding of modern data architecture principles. Hands-on experience with Tableau for enterprise-grade dashboard development. Familiarity with orchestration tools (e.g., Prefect, Airflow), data quality frameworks, and metadata tools. Proficiency in Git, CI/CD, and scripting with Python. Excellent communication skills and ability to work collaboratively across technical and business teams.
… environments, including ERPs and CRMs. You'll collaborate closely with client stakeholders to translate ambiguous requirements into clean, maintainable solutions that drive real impact. Familiarity with tools such as Airflow, DBT, Databricks, dashboarding frameworks, and TypeScript is a strong plus as you help deliver end-to-end production-ready systems. Interview Process: Teams conversation (introductory chat), technical take-home …
Banbury, Oxfordshire, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
… Databricks experience, including Unity Catalog. Strong skills in Python, Spark, and SQL, and experience with SQL databases. Terraform experience for cloud infrastructure as code. Experience with Azure and workflow tools (Airflow, ADF). Excellent problem-solving ability, communication skills, and attention to detail. Experience across Waterfall and Agile methodologies. Curious, inclusive, and committed to continuous learning. To apply for this …