London, England, United Kingdom Hybrid / WFH Options
DATAPAO
and hiring efforts. What does it take to fit the bill? Technical Expertise 5+ years in Data Engineering, focusing on cloud platforms (AWS, Azure, GCP); Proven experience with Databricks (PySpark, SQL, Delta Lake, Unity Catalog); Extensive ETL/ELT and data pipeline orchestration experience (e.g., Databricks Workflows, DLT, Airflow, ADF, Glue, Step Functions); Proficiency in SQL and Python for …
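By way of illustration, a minimal sketch of the kind of Databricks pipeline step this role describes, assuming a Databricks runtime (where `spark` is provided) and a Unity Catalog schema named main.sales; the path and table names are illustrative, not details from the listing:

from pyspark.sql import functions as F

# Runs on a Databricks cluster, where `spark` is supplied by the platform.
raw = spark.read.json("/Volumes/main/sales/raw/orders/")  # hypothetical landing path

cleaned = (raw
    .dropDuplicates(["order_id"])
    .withColumn("ingested_at", F.current_timestamp()))

# Append to a Unity Catalog-governed Delta table (catalog.schema.table naming).
cleaned.write.format("delta").mode("append").saveAsTable("main.sales.orders")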
London, England, United Kingdom Hybrid / WFH Options
Datapao
Expertise You (ideally) have 5+ years of experience in Data Engineering, with a focus on cloud platforms (AWS, Azure, GCP); You have a proven track record working with Databricks (PySpark, SQL, Delta Lake, Unity Catalog); You have extensive experience in ETL/ELT development and data pipeline orchestration (e.g., Databricks Workflows, DLT, Airflow, ADF, Glue, and Step Functions); You …
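For the orchestration side mentioned above, a hedged sketch of an Airflow DAG that triggers an existing Databricks Workflows job; the job ID and connection name are placeholders, and this assumes the apache-airflow-providers-databricks package is installed:

from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="nightly_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Triggers a pre-defined Databricks Workflows job by ID (123 is hypothetical).
    run_elt = DatabricksRunNowOperator(
        task_id="run_databricks_elt_job",
        databricks_conn_id="databricks_default",
        job_id=123,
    )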
London, England, United Kingdom Hybrid / WFH Options
Circana
team. In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a … make a significant impact, we encourage you to apply! Job Responsibilities ETL/ELT Pipeline Development: Design, develop, and optimize efficient and scalable ETL/ELT pipelines using Python, PySpark, and Apache Airflow. Implement batch and real-time data processing solutions using Apache Spark. Ensure data quality, governance, and security throughout the data lifecycle. Cloud Data Engineering: Manage and … version control, and documentation. Requirements This is a client-facing role, so strong communication and collaboration skills are vital. Proven experience in data engineering, with hands-on expertise in Azure Data Services, PySpark, Apache Spark, and Apache Airflow. Strong programming skills in Python and SQL, with the ability to write efficient and maintainable code. Deep understanding of Spark internals, including RDDs, DataFrames …
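To make those pipeline responsibilities concrete, a minimal sketch of a PySpark batch step with a simple data-quality gate; the ADLS paths and column names are assumptions for illustration:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_batch").getOrCreate()

orders = spark.read.parquet("abfss://lake@account.dfs.core.windows.net/raw/orders/")  # hypothetical path

transformed = (orders
    .filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("order_ts")))

# Data-quality gate: failing here fails the task, and hence the Airflow run.
null_keys = transformed.filter(F.col("order_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"{null_keys} rows missing order_id")

transformed.write.mode("overwrite").partitionBy("order_date").parquet(
    "abfss://lake@account.dfs.core.windows.net/curated/orders/")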
Wilmslow, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
building scalable, reliable data pipelines, managing data infrastructure, and supporting data products across various cloud environments, primarily Azure. Key Responsibilities: Develop end-to-end data pipelines using Python, Databricks, PySpark, and SQL. Integrate data from various sources including APIs, Excel, CSV, JSON, and databases. Manage data lakes, warehouses, and lakehouses within Azure cloud environments. Apply data modelling techniques such …
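As a rough sketch of the multi-source ingestion described, with the paths and API endpoint invented for illustration:

import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingest").getOrCreate()

# File-based sources land directly in Spark.
customers = spark.read.option("header", True).csv("/mnt/landing/customers.csv")
events = spark.read.json("/mnt/landing/events/")

# API sources can be pulled with requests, then parallelised into a DataFrame.
payload = requests.get("https://api.example.com/v1/products", timeout=30).json()  # hypothetical endpoint
products = spark.createDataFrame(payload["items"])

# Persist to the lakehouse as Delta for downstream modelling.
products.write.format("delta").mode("overwrite").save("/mnt/lakehouse/bronze/products")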
data engineering and reporting, including storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You're a problem-solver, pragmatically exploring options and finding effective solutions. An understanding of how to design and build well-structured, maintainable systems. Strong communication skills …
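A small sketch of the querying-and-reporting side, assuming curated Parquet data and illustrative names; Power BI would typically read the resulting table:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("reporting").getOrCreate()
spark.read.parquet("/mnt/curated/sales/").createOrReplaceTempView("sales")

# A reporting-ready monthly aggregate.
monthly = spark.sql("""
    SELECT date_trunc('month', order_date) AS month,
           region,
           SUM(amount) AS revenue,
           COUNT(*)    AS orders
    FROM sales
    GROUP BY 1, 2
""")
monthly.write.mode("overwrite").saveAsTable("reporting.monthly_sales")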
Leeds, England, United Kingdom Hybrid / WFH Options
Scott Logic
data engineering and reporting, including storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You've got a background in software engineering, including Front End technologies like JavaScript. You're a problem-solver, pragmatically exploring options and finding effective solutions. An …
Bristol, England, United Kingdom Hybrid / WFH Options
Scott Logic
data engineering and reporting, including storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You've got a background in software engineering, including Front End technologies like JavaScript. You're a problem-solver, pragmatically exploring options and finding effective solutions. An …
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
Scott Logic
data engineering and reporting, including storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You've got a background in software engineering, including Front End technologies like JavaScript. You're a problem-solver, pragmatically exploring options and finding effective solutions. An …
London, England, United Kingdom Hybrid / WFH Options
Scott Logic Ltd
data engineering and reporting, including storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You've got a background in software engineering. You're a problem-solver, pragmatically exploring options and finding effective solutions. An understanding of how to design and …
London, England, United Kingdom Hybrid / WFH Options
Locus Robotics
Locus Robotics is a global leader in warehouse automation, delivering unmatched flexibility, unlimited throughput, and actionable intelligence to optimize operations. Powered by LocusONE, an AI-driven platform, our advanced autonomous mobile robots seamlessly integrate into existing warehouse environments to …
London, England, United Kingdom Hybrid / WFH Options
ZipRecruiter
Experience Proven experience as a Data Engineer working in cloud environments (AWS). Strong proficiency with Python and SQL. Extensive hands-on experience in AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Familiarity with DevOps practices and infrastructure-as-code (e.g., Terraform, CloudFormation). Solid understanding of data modeling, ETL frameworks …
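For flavour, a hedged sketch of that AWS-side workflow using awswrangler against Athena and the Glue catalog; the bucket, database, and table names are assumptions:

import awswrangler as wr

# Query an Athena-registered table into pandas.
df = wr.athena.read_sql_query(
    sql="SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id",
    database="analytics",
)

# Write results back to S3 as Parquet, registered in the Glue catalog.
wr.s3.to_parquet(
    df=df,
    path="s3://example-bucket/curated/customer_totals/",  # hypothetical bucket
    dataset=True,
    database="analytics",
    table="customer_totals",
)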
London, England, United Kingdom Hybrid / WFH Options
Artefact
Artefact is a new generation of data service provider, specialising in data consulting and data-driven digital marketing, dedicated to transforming data into business impact across the entire value chain of …
Newcastle upon Tyne, England, United Kingdom Hybrid / WFH Options
Somerset Bridge Group
with large-scale datasets using Azure Data Factory (ADF) and Databricks. Strong proficiency in SQL (T-SQL, Spark SQL) for data extraction, transformation, and optimisation. Proficiency in Azure Databricks (PySpark, Delta Lake, Spark SQL) for big data processing. Knowledge of data warehousing concepts and relational database design, particularly with Azure Synapse Analytics. Experience working with Delta Lake for schema evolution, ACID transactions, and time travel in Databricks. Strong Python (PySpark) skills for big data processing and automation. Experience with Scala (optional, but preferred for advanced Spark applications). Experience working with Databricks Workflows & Jobs for data orchestration. Strong knowledge of feature engineering and feature stores, particularly the Databricks Feature Store for ML training and inference. Experience with data …
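To ground the Delta Lake points, a minimal sketch of schema evolution and time travel, assuming a Databricks runtime (where `spark` is provided) and illustrative table names:

from pyspark.sql import functions as F

quotes = spark.read.table("insurance.quotes")

# Schema evolution: mergeSchema lets newly added columns flow into the target table.
enriched = quotes.withColumn("loaded_at", F.current_timestamp())
(enriched.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .saveAsTable("insurance.quotes_curated"))

# Time travel: read the table as of an earlier version, e.g. for audit or rollback.
v0 = spark.read.format("delta").option("versionAsOf", 0).table("insurance.quotes_curated")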
UK. In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a … desire to make a significant impact, we encourage you to apply! Job Responsibilities Data Engineering & Data Pipeline Development: Design, develop, and optimize scalable data workflows using Python, PySpark, and Airflow. Implement real-time and batch data processing using Spark. Enforce best practices for data quality, governance, and security throughout the data lifecycle. Ensure data availability, reliability and performance through … Implement CI/CD pipelines for data workflows to ensure smooth and reliable deployments. Big Data & Analytics: Build and optimize large-scale data processing pipelines using Apache Spark and PySpark. Implement data partitioning, caching, and performance tuning for Spark-based workloads. Work with diverse data formats (structured and unstructured) to support advanced analytics and machine learning initiatives. Workflow Orchestration …
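On the partitioning, caching, and performance-tuning point, a hedged sketch; the shuffle-partition count and paths are illustrative, not recommendations:

from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
    .appName("tuning_sketch")
    .config("spark.sql.shuffle.partitions", "200")  # sized to the cluster, not a universal value
    .getOrCreate())

events = spark.read.parquet("/mnt/curated/events/")

# Repartition on the grouping key to reduce shuffle, and cache the hot
# intermediate that several downstream aggregations reuse.
by_user = events.repartition("user_id").cache()

daily = by_user.groupBy("user_id", F.to_date("ts").alias("day")).count()
totals = by_user.groupBy("user_id").agg(F.sum("amount").alias("total"))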
City of London, London, United Kingdom Hybrid / WFH Options
Osmii
Core Platform Build & Development Hands-on Implementation: Act as a lead engineer in the initial build-out of core data pipelines, ETL/ELT processes, and data models using PySpark, SQL, and Databricks notebooks. Data Ingestion & Integration: Establish scalable data ingestion frameworks from diverse sources (batch and streaming) into the Lakehouse. Performance Optimization: Design and implement solutions for optimal … Extensive experience with Azure data services (e.g., Azure Data Factory, Azure Data Lake Storage, Azure Synapse) and architecting cloud-native data platforms. Programming Proficiency: Expert-level skills in Python (PySpark) and SQL for data engineering and transformation. Scala is a strong plus. Data Modelling: Strong understanding and practical experience with data warehousing, data lake, and dimensional modelling concepts. ETL …
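As one possible shape for the dimensional-modelling work, a sketch of a Delta Lake MERGE upsert into a dimension table, assuming a Databricks runtime (where `spark` is provided) and invented names:

from delta.tables import DeltaTable

updates = spark.read.parquet("/mnt/staging/customers/")  # hypothetical staging path

dim = DeltaTable.forName(spark, "gold.dim_customer")
(dim.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())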
Wakefield, Yorkshire, United Kingdom Hybrid / WFH Options
Flippa.com
Continuous integration/deployment (CI/CD) automation, rigorous code reviews, documentation as communication. Preferred Qualifications Familiarity with data manipulation and experience with Python libraries like Flask, FastAPI, Pandas, PySpark, and PyTorch, to name a few. Proficiency in statistics and/or machine learning libraries like NumPy, matplotlib, seaborn, scikit-learn, etc. Experience in building ETL/ELT processes and …
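A small sketch of the scikit-learn side the listing names, on synthetic stand-in data:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data purely for illustration.
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = (X[:, 0] + X[:, 1] > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")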
Bristol, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
What you'll need to apply: Excellent experience as a Senior Data Engineer, with some experience mentoring others. Excellent Python and SQL skills, with hands-on experience building pipelines in Spark (PySpark preferred). Experience with cloud platforms (AWS/Azure). Solid understanding of data architecture, modelling, and ETL/ELT pipelines. Experience using tools like Databricks, Redshift, Snowflake, or similar. Comfortable …
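Finally, a routine PySpark step of the kind such pipelines involve, keeping only the latest record per key; the names and paths are illustrative:

from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("dedup").getOrCreate()

raw = spark.read.parquet("/mnt/raw/customers/")

# Rank records per customer by recency and keep the newest one.
latest = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
deduped = (raw
    .withColumn("rn", F.row_number().over(latest))
    .filter(F.col("rn") == 1)
    .drop("rn"))

deduped.write.format("delta").mode("overwrite").save("/mnt/curated/customers/")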