data quality, or other areas directly relevant to data engineering responsibilities and tasks
Proven project experience developing and maintaining data warehouses in big data solutions (Snowflake)
Expert knowledge in Apache technologies such as Kafka, Airflow, and Spark to build scalable and efficient data pipelines
Ability to design, build, and deploy data solutions that capture, explore, transform, and utilize data …
of data modelling and data warehousing concepts
Familiarity with version control systems, particularly Git
Desirable Skills:
Experience with infrastructure as code tools such as Terraform or CloudFormation
Exposure to Apache Spark for distributed data processing
Familiarity with workflow orchestration tools such as Airflow or AWS Step Functions
Understanding of containerisation using Docker
Experience with CI/CD pipelines and …
data engineering tasks.
Experience building and maintaining web scraping pipelines.
Strong SQL skills, with expertise in performance tuning.
Strong proficiency with dbt for data transformations.
Hands-on experience with Apache Airflow or Prefect.
Proficiency with GitHub, GitHub Actions, and CI/CD pipelines.
Nice to have:
Experience with GCP (BigQuery, Dataflow, Composer, Pub/Sub) or AWS.
Familiarity with …
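The SQL performance tuning called for above can be illustrated with a minimal sketch. This uses sqlite3 from the Python standard library as a stand-in for a production warehouse; the table and column names are invented for the example, not taken from any listing.

```python
import sqlite3

# Toy table standing in for a warehouse fact table (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, ts TEXT)")
conn.executemany(
    "INSERT INTO events (user_id, ts) VALUES (?, ?)",
    [(i % 100, f"2024-01-{i % 28 + 1:02d}") for i in range(1000)],
)

query = "SELECT COUNT(*) FROM events WHERE user_id = 42"

# Without an index, the planner must scan every row to evaluate the filter.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][3]

# An index on the filter column turns the full scan into an index search.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][3]

print(plan_before)  # reports a full-table SCAN
print(plan_after)   # reports a SEARCH using idx_events_user
```

The same read-the-plan-then-index workflow carries over to Snowflake, BigQuery, or Postgres, though each engine exposes its plan differently.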
London, South East England, United Kingdom Hybrid / WFH Options
iO Associates
Skills & Experience:
Strong experience with Snowflake data warehousing
Solid AWS cloud engineering experience
Proficient in Python for data engineering workflows
Skilled in building and maintaining Airflow DAGs
Familiarity with Apache Iceberg for table format and data lake optimisation
If this could be of interest, please get in touch with Alex Lang at iO Associates to apply and for more …
and maintenance of IDBS's software platforms adheres to IDBS's architecture vision.
What We'll Get You Doing
Design, develop, and maintain scalable data pipelines using Databricks and Apache Spark (PySpark) to support analytics and other data-driven initiatives.
Support the elaboration of requirements, formulation of the technical implementation plan, and backlog refinement.
Provide technical perspective to products …
and managing cloud infrastructure as code
Proficiency in programming languages such as Python, Spark, SQL
Strong experience with SQL databases
Expertise in data pipeline and workflow management tools (e.g., Apache Airflow, ADF)
Experience with cloud platforms (Azure preferred) and related data services
Excellent problem-solving skills and attention to detail
Inclusive and curious, continuously seeks to build knowledge and …
of automation
IT WOULD BE NICE FOR THE SENIOR SOFTWARE ENGINEER TO HAVE....
Cloud-based experience
Microservice architecture or serverless architecture
Big Data/Messaging technologies such as Apache NiFi/MiNiFi/Kafka
TO BE CONSIDERED....
Please either apply by clicking online or emailing me directly to . For further information please call me on . I can …
London (City of London), South East England, United Kingdom Hybrid / WFH Options
Datatech Analytics
GCP/BigQuery or other cloud data warehouses (e.g., Snowflake, Redshift).
Familiarity with data orchestration tools (e.g., Airflow).
Experience with data visualisation platforms (such as Preset.io/Apache Superset or other).
Exposure to CI/CD pipelines, ideally using GitLab CI.
Background working with media, marketing, or advertising data.
The Opportunity: Work alongside smart, supportive teammates …
London (City of London), South East England, United Kingdom
iO Associates
ideal candidate will lead the delivery of modern data solutions across multiple projects, leveraging Azure and Databricks technologies in an agile environment.
Core Requirements
Cloud & Data Engineering: Azure, Databricks, Apache Spark, Azure Data Factory, Delta Lake
Programming & Querying: Python, SQL (complex, high-performance queries)
Data Governance & DevOps: Unity Catalog/Purview, Terraform, Azure DevOps
Consulting: Requirements gathering, stakeholder engagement …
Winchester, Hampshire, England, United Kingdom Hybrid / WFH Options
Ada Meher
days a week, based on business need.
To Be Considered:
Demonstrable expertise and experience working on large-scale Data Engineering projects
Strong experience in Python/PySpark, Databricks & Apache Spark
Hands-on experience with both batch & streaming pipelines
Strong experience in AWS and associated tooling (e.g., S3, Glue, Redshift, Lambda, Terraform, etc.)
Experience designing Data Engineering platforms from scratch
Alongside …
. Solid understanding of DevOps principles and agile delivery.
Excellent problem-solving skills and a proactive, team-oriented approach.
Confident client-facing communication skills.
Desirable Skills & Experience
Experience with Apache NiFi and Node.js.
Familiarity with JSON, XML, XSD, and XSLT.
Knowledge of Jenkins, Maven, Bitbucket, and Jira.
Exposure to AWS and cloud technologies.
Experience working within …
JUnit 5, Mockito, database integration.
AI: LangChain, Retrieval-Augmented Generation (RAG), MCP servers (as consumer and developer), and prompt engineering for LLM optimization.
Exposure to popular libraries and frameworks (Apache Commons, Guava, Swagger, TestContainers).
Architecture & Platforms: Skilled in designing and deploying distributed systems on cloud hyperscalers (AWS, GCP).
Familiarity with containerization (Docker), CI/CD pipelines, DevOps …
Deep Learning or LLM Frameworks)
Desirable
Minimum 2 years' experience in a data-related field
Minimum 2 years in a Business or Management Consulting field
Experience of Docker, Hadoop, PySpark, Apache or MS Azure
Minimum 2 years NHS/Healthcare experience
Disclosure and Barring Service Check
This post is subject to the Rehabilitation of Offenders Act (Exceptions Order) 1975 and …
London, South East England, United Kingdom Hybrid / WFH Options
Hunter Bond
in Python for data pipelines, transformation, and orchestration.
Deep understanding of the Azure ecosystem (e.g., Data Factory, Blob Storage, Synapse).
Proficiency in Databricks (or strong equivalent experience with Apache Spark).
Proven ability to work within enterprise-level environments with a focus on clean, scalable, and secure data solutions.
If you are the right fit, contact me directly …
Saffron Walden, Essex, South East, United Kingdom Hybrid / WFH Options
EMBL-EBI
advantageous
Good communication skills
Experience in Python and/or Java development
Experience in Git and basic Unix commands
You may also have
Experience with large data processing technologies (Apache Spark)
Other helpful information
Hybrid Working: At EMBL-EBI we are pleased to offer hybrid working options for all our employees. Our team works at least two days on …
S3, VPC, IAM, CloudFormation/Terraform, and backups.
Set up and improve DevOps pipelines: CI/CD, Docker, Kubernetes, automated tests, load testing, and web/application servers (e.g. Apache).
Troubleshoot effectively using CLI tools (bash).
Guide integrations with external CAFM/CRM systems via APIs.
Apply a strong ownership mentality, taking features from design through deployment …
Technical Expertise:
Solid experience in Python programming, particularly using data manipulation and processing libraries such as Pandas, NumPy, and Apache Spark.
Hands-on experience with open-source data frameworks like Apache Spark, Apache Kafka, and Apache Airflow.
Strong proficiency in SQL, including advanced query development and performance tuning.
Good understanding of distributed computing principles and big … automation pipelines.
Experience working with relational databases such as PostgreSQL, MySQL, or equivalent platforms.
Skilled in using containerization technologies including Docker and Kubernetes.
Experience with workflow orchestration tools like Apache Airflow or Dagster.
Familiar with streaming data pipelines and real-time analytics solutions. …
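The orchestration tools named above (Apache Airflow, Dagster) both model a pipeline as a DAG of tasks run in dependency order. A minimal sketch of that scheduling idea using only the Python standard library; this is not the Airflow or Dagster API, and the task names are invented for illustration.

```python
from graphlib import TopologicalSorter

# Toy pipeline DAG: each key lists the tasks it depends on.
# Mimics the dependency model behind Airflow/Dagster, not their APIs.
pipeline = {
    "transform": {"extract"},
    "quality_check": {"extract"},
    "load": {"transform", "quality_check"},
}

# static_order() yields tasks so every dependency runs before its dependents.
run_order = list(TopologicalSorter(pipeline).static_order())
print(run_order)  # "extract" comes first, "load" comes last
```

Real orchestrators add scheduling, retries, and parallel execution of independent tasks on top of exactly this ordering guarantee.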