in doing this properly - not endless meetings and PowerPoints. What you'll be doing:
- Designing, building, and optimising end-to-end data pipelines using Azure Databricks, PySpark, ADF, and Delta Lake
- Implementing a medallion architecture - from raw to enriched to curated (see the sketch after this list)
- Working with Delta Lake and Spark for both batch and streaming data
- Collaborating with analysts …
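As a rough illustration of the medallion pattern this role describes, here is a minimal PySpark sketch moving data from a raw (bronze) Delta table through an enriched (silver) layer to a curated (gold) aggregate. All table, path, and column names are illustrative assumptions, not from the listing.

```python
# Minimal medallion-architecture sketch in PySpark with Delta Lake.
# Table/column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: ingest raw files as-is into a Delta table.
raw = spark.read.json("/mnt/landing/orders/")
raw.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver: clean and conform the raw data.
silver = (
    spark.table("bronze.orders")
    .dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Gold: curated business-level aggregate for analysts.
gold = silver.groupBy("order_date").agg(F.sum("amount").alias("daily_revenue"))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_revenue")
```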
develop, and maintain ETL/ELT pipelines using Airflow for orchestration and scheduling.
- Build and manage data transformation workflows in DBT running on Databricks (a minimal Airflow-plus-dbt sketch follows this listing).
- Optimize data models in Delta Lake for performance, scalability, and cost efficiency.
- Collaborate with analytics, BI, and data science teams to deliver clean, reliable datasets.
- Implement data quality checks (dbt tests, monitoring) and … Data Engineer (or similar). Hands-on expertise with:
- DBT (dbt-core, dbt-databricks adapter, testing & documentation)
- Apache Airflow (DAG design, operators, scheduling, dependencies)
- Databricks (Spark, SQL, Delta Lake, job clusters, SQL Warehouses)
- Strong SQL skills and understanding of data modeling (Kimball, Data Vault, or similar)
- Proficiency in Python for scripting and pipeline development. …
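To make the orchestration pattern concrete, below is a minimal sketch of an Airflow DAG that runs dbt models and then dbt tests against a Databricks target. The project path, target name, and schedule are assumptions for illustration; a real deployment might use dbt-specific operators rather than BashOperator.

```python
# Hypothetical Airflow DAG: run dbt models, then dbt tests, on a daily schedule.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_databricks_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; earlier versions use schedule_interval
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt_project && dbt run --target databricks",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt_project && dbt test --target databricks",
    )
    dbt_run >> dbt_test  # tests only run if the build succeeds
```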
Key Responsibilities:
- Lead the design, development, and maintenance of scalable, high-performance data pipelines on Databricks.
- Architect and implement data ingestion, transformation, and integration workflows using PySpark, SQL, and Delta Lake.
- Guide the team in migrating legacy ETL processes to modern cloud-based data pipelines.
- Ensure data accuracy, schema consistency, row counts, and KPIs during migration and transformation (a hypothetical reconciliation check is sketched below).
- Collaborate … and analytics.
Required Skills & Qualifications:
- 10-12 years of experience in data engineering, with at least 3+ years in a technical lead role.
- Strong expertise in Databricks, PySpark, Delta Lake, and DBT.
- Advanced proficiency in SQL, ETL/ELT pipelines, and data modelling.
- Experience with Azure Data Services (ADLS, ADF, Synapse) or other major cloud platforms (AWS …
- Understanding of data warehousing, transformation logic, SLAs, and dependencies.
- Hands-on experience with real-time streaming and near-real-time batch is a plus, as is optimisation of Databricks and DBT workloads and Delta Lake.
- Familiarity with CI/CD pipelines, DevOps practices, and Git-based workflows.
- Knowledge of data security, encryption, and compliance frameworks (GDPR, SOC2, ISO) is good to have.
- Excellent …
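Since this listing emphasises verifying row counts, schema consistency, and KPIs during migration, here is a minimal, hypothetical PySpark reconciliation check between a legacy table and its migrated Delta equivalent. Table and column names are placeholders.

```python
# Sketch: reconcile a legacy table against its migrated Delta Lake copy.
# Table/column names are hypothetical; adapt to the real migration pair.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

legacy = spark.table("legacy_dw.sales")
migrated = spark.table("lakehouse.silver_sales")

# 1. Row counts must match exactly.
legacy_count, migrated_count = legacy.count(), migrated.count()
assert legacy_count == migrated_count, (
    f"Row count mismatch: legacy={legacy_count}, migrated={migrated_count}"
)

# 2. Schemas (names and types) must line up.
legacy_schema = {(f.name.lower(), f.dataType.simpleString()) for f in legacy.schema}
migrated_schema = {(f.name.lower(), f.dataType.simpleString()) for f in migrated.schema}
assert legacy_schema == migrated_schema, f"Schema drift: {legacy_schema ^ migrated_schema}"

# 3. Spot-check a business KPI, e.g. total sales amount per year.
kpi_diff = (
    legacy.groupBy("year").sum("amount")
    .exceptAll(migrated.groupBy("year").sum("amount"))
)
assert kpi_diff.count() == 0, "KPI aggregates diverge between legacy and migrated data"
```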
Birmingham, West Midlands, England, United Kingdom Hybrid / WFH Options
MYO Talent
Data Engineer/Data Engineering/Data Consultant/Lakehouse/Delta Lake/Data Warehousing/ETL/Azure/Azure Databricks/Python/SQL/Based in the West Midlands/Solihull/Birmingham area, Permanent role, £ + car/allowance (£5,000) + 15% bonus. One of our leading clients is looking to recruit … role
Salary: £ + car/allowance + bonus
Experience:
- Experience in a Data Engineer/Data Engineering role
- Large and complex datasets
- Azure, Azure Databricks
- Microsoft SQL Server
- Lakehouse, Delta Lake
- Data Warehousing
- ETL
- Database Design
- Python/PySpark
- Azure Blob Storage
- Azure Data Factory
Desirable: Exposure to ML/Machine Learning/AI/Artificial Intelligence …
machine learning models in production. What you'll be doing as Lead ML Ops Engineer:
- Leading the design and implementation of robust ML Ops pipelines using Azure, Databricks, and Delta Lake
- Architecting and overseeing API services and caching layers (e.g., Azure Cache for Redis; see the sketch below)
- Driving integration with cloud-based data storage solutions such as Snowflake
- Collaborating with data … from the Machine Learning Operations Lead:
- Proven experience in ML Ops leadership, with deep expertise in Azure, Databricks, and cloud-native architectures
- Strong understanding of Postgres, Redis, Snowflake, and Delta Lake architecture
- Hands-on experience with Docker, container orchestration, and scalable API design
- Excellent communication and stakeholder management skills
- Ability to drive strategic initiatives and influence technical direction …
Employment Type: Permanent
Salary: £70000 - £90000/annum 25+bank, bonus + more
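For the API-and-caching responsibility above, here is a minimal sketch of a prediction cache in front of a model, using the redis-py client. The hostname, key scheme, TTL, and model interface are illustrative assumptions; Azure Cache for Redis exposes a standard Redis endpoint, so the plain client works against it.

```python
# Hypothetical sketch: cache model predictions in Redis to cut serving latency.
import json

import redis

# Azure Cache for Redis is wire-compatible with the standard redis client;
# host/port/credentials below are placeholders.
cache = redis.Redis(
    host="my-cache.redis.cache.windows.net", port=6380,
    password="<access-key>", ssl=True,
)

CACHE_TTL_SECONDS = 300  # assumption: 5-minute staleness is acceptable

def predict_with_cache(model, features: dict) -> float:
    # Deterministic key from the sorted feature dict.
    key = "pred:" + json.dumps(features, sort_keys=True)
    hit = cache.get(key)
    if hit is not None:
        return float(hit)
    score = model.predict(features)  # placeholder model interface
    cache.setex(key, CACHE_TTL_SECONDS, str(score))
    return score
```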
London (City of London), South East England, United Kingdom
Mastek
data processing logic for cleaning, enriching, and aggregating data (a minimal example follows this listing). Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation:
- Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services.
- Implement best practices for Databricks development and deployment.
- Optimise Databricks workloads for performance and cost.
- Need to program using the languages …
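As a minimal illustration of the cleaning, enriching, and aggregating work described, here is a hypothetical Spark SQL pass over Unity Catalog Delta tables. Catalog, schema, and column names are assumptions.

```python
# Sketch: clean, enrich, and aggregate a Delta table via Spark SQL on Databricks.
# Catalog/schema/table names are hypothetical Unity Catalog three-part names.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

cleaned = spark.sql("""
    SELECT
        CAST(order_id AS BIGINT)      AS order_id,
        TRIM(LOWER(customer_email))   AS customer_email,  -- standardise
        COALESCE(quantity, 0)         AS quantity,        -- fill nulls
        amount * fx_rate              AS amount_gbp       -- enrich via reference data
    FROM main.bronze.orders o
    JOIN main.reference.fx_rates r USING (currency)
    WHERE order_id IS NOT NULL                            -- drop bad rows
""")

# Aggregate and persist the curated result.
(
    cleaned.groupBy("customer_email").sum("amount_gbp")
    .write.format("delta").mode("overwrite")
    .saveAsTable("main.gold.customer_spend")
)
```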
City of London, London, United Kingdom Hybrid / WFH Options
ECS
engineering, with a strong focus on building scalable data pipelines:
- Expertise in Azure Databricks, including building and managing ETL pipelines using PySpark or Scala
- Solid understanding of Apache Spark, Delta Lake, and distributed data processing concepts (a MERGE upsert sketch follows below)
- Hands-on experience with Azure Data Lake Storage, Azure Data Factory, and Azure Synapse Analytics
- Proficiency in SQL and Python for …
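One Delta Lake concept worth knowing for pipelines like these is the MERGE upsert, sketched below using the delta-spark Python API that ships with Databricks. Table names, the storage path, and the join key are hypothetical.

```python
# Sketch: idempotent upsert of a daily batch into a Delta table with MERGE.
# Table names, path, and key are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Daily batch landed in Azure Data Lake Storage.
updates = spark.read.parquet("abfss://landing@account.dfs.core.windows.net/customers/")

target = DeltaTable.forName(spark, "silver.customers")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()      # refresh existing rows
    .whenNotMatchedInsertAll()   # add new rows
    .execute()
)
```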
data warehousing, or analytics engineering
- Strong SQL and Python skills with hands-on experience in PySpark
- Exposure to Azure Databricks, Microsoft Fabric, or similar cloud data platforms
- Understanding of Delta Lake, Git, and CI/CD workflows
- Experience with relational data modelling and dimensional modelling
- Awareness of data governance tools such as Purview or Unity Catalog
- Excellent analytical …
and minimize complexity
- Exceptional interpersonal skills - you communicate clearly with stakeholders as well as other engineers, fostering a collaborative, supportive working environment
- Experience in the financial markets, especially in delta one, store of value, and/or FICC options trading
- Experience with Linux-based, concurrent, high-throughput, low-latency software systems
- Experience with pipeline orchestration frameworks (e.g. Airflow, Dagster) …
- Experience with streaming platforms (e.g. Kafka), data lake platforms (e.g. Delta Lake, Apache Iceberg), and relational databases (a minimal streaming sketch follows below)
- Have a Bachelor or advanced degree in Computer Science, Mathematics, Statistics, Physics, Engineering, or equivalent work experience
For more information about DRW's processing activities and our use of job applicants' data, please view our Privacy Notice at …
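To illustrate the streaming-plus-data-lake combination the listing names, here is a minimal, hypothetical Spark Structured Streaming job reading from Kafka and appending to a Delta table. Broker addresses, the topic, and paths are placeholders, and the cluster needs the Kafka connector available.

```python
# Sketch: stream events from Kafka into a Delta Lake table.
# Brokers, topic, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "trade-events")
    .option("startingOffsets", "latest")
    .load()
    .select(
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp"),
    )
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/trade-events")  # exactly-once bookkeeping
    .outputMode("append")
    .start("/mnt/delta/trade_events")
)
query.awaitTermination()
```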
London (City of London), South East England, United Kingdom
MathCo
Define solution architectures and oversee end-to-end data pipelines, ETL/ELT processes, and data platform implementations.
- Guide teams in applying best practices across Python, SQL, Databricks, and Delta Lake to deliver high-performance outcomes.
- Manage client programmes, ensuring projects are delivered on time, within scope, and aligned to strategic goals.
- Promote engineering standards, agile ways of … CD adoption across teams.
- Act as a trusted advisor, simplifying technical concepts and communicating clearly with business stakeholders.
- Develop and maintain data pipelines using Azure ADF, Databricks, PySpark, and Delta Lake.
- Build and optimise workflows in Python and SQL to support supply chain, sales, and marketing analytics.
- Contribute to CI/CD pipelines using GitHub Actions (or similar) for … (a hypothetical CI test is sketched below)
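For the CI/CD contribution mentioned last, one common pattern is unit-testing pipeline transformation logic so it runs on every pull request, for example from a GitHub Actions workflow. Below is a minimal, hypothetical pytest example; the function under test and column names are assumptions.

```python
# Sketch: a pytest unit test for a PySpark transformation, runnable in CI.
# The transformation and column names are hypothetical.
import pytest
from pyspark.sql import SparkSession, functions as F

def add_gross_margin(df):
    """Transformation under test: derive margin from revenue and cost."""
    return df.withColumn("margin", F.col("revenue") - F.col("cost"))

@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session keeps CI runs lightweight.
    return SparkSession.builder.master("local[1]").appName("ci-tests").getOrCreate()

def test_add_gross_margin(spark):
    df = spark.createDataFrame([(100.0, 60.0)], ["revenue", "cost"])
    result = add_gross_margin(df).collect()[0]
    assert result["margin"] == pytest.approx(40.0)
```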
City of London, London, United Kingdom Hybrid / WFH Options
Robert Half
years as a Data Engineer
- Hands-on with Databricks, Spark, Python, SQL
- Cloud experience (Azure, AWS, or GCP)
- Strong understanding of data quality, governance, and security
Nice to Have: Delta Lake, DBT, Snowflake, Terraform, CI/CD, or DevOps exposure
You'll:
- Build and optimise ETL pipelines (a maintenance sketch follows below)
- Enable analytics and reporting teams
- Drive automation and best practices
Why …
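On the "optimise ETL pipelines" point, a typical Databricks-side lever is compacting and co-locating Delta files. A minimal sketch, with table and column names as assumptions:

```python
# Sketch: routine Delta Lake maintenance on Databricks to speed up reads.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows by a commonly filtered column.
spark.sql("OPTIMIZE silver.orders ZORDER BY (order_date)")

# Remove data files no longer referenced by the table (default retention applies).
spark.sql("VACUUM silver.orders")
```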