Key Responsibilities: Lead the design, development, and maintenance of scalable, high-performance data pipelines on Databricks. Architect and implement data ingestion, transformation, and integration workflows using PySpark, SQL, and Delta Lake. Guide the team in migrating legacy ETL processes to modern cloud-based data pipelines. Ensure data accuracy, schema consistency, row counts, and KPIs during migration and transformation. Collaborate … and analytics.
________________________________________
Required Skills & Qualifications: 10-12 years of experience in data engineering, with at least 3+ years in a technical lead role. Strong expertise in Databricks, PySpark, Delta Lake, and DBT. Advanced proficiency in SQL, ETL/ELT pipelines, and data modelling. Experience with Azure Data Services (ADLS, ADF, Synapse) or other major cloud platforms (AWS … of data warehousing, transformation logic, SLAs, and dependencies. Hands-on experience with real-time streaming or near-real-time batch is a plus, as is optimisation of Databricks and DBT workloads and Delta Lake. Familiarity with CI/CD pipelines, DevOps practices, and Git-based workflows. Knowledge of data security, encryption, and compliance frameworks (GDPR, SOC 2, ISO) is good to have. Excellent …
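As a minimal sketch of the migration validation this role describes (checking row counts and schema consistency between a legacy table and its Delta Lake replacement) — the table names here are hypothetical placeholders, not anything named in the listing:

```python
# Sketch of a migration reconciliation check: compare row counts and schemas
# between a legacy table and its migrated Delta Lake counterpart.
# Both table names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("migration-validation").getOrCreate()

legacy = spark.table("legacy_db.orders")      # hypothetical legacy source
migrated = spark.table("lakehouse.orders")    # hypothetical Delta target

# Row-count parity: both sides should agree after the migration run.
legacy_count, migrated_count = legacy.count(), migrated.count()
assert legacy_count == migrated_count, (
    f"Row count mismatch: legacy={legacy_count}, migrated={migrated_count}"
)

# Schema consistency: same column names and types, ignoring column order.
legacy_schema = {f.name: f.dataType for f in legacy.schema.fields}
migrated_schema = {f.name: f.dataType for f in migrated.schema.fields}
assert legacy_schema == migrated_schema, "Schema drift between legacy and migrated tables"
```

In practice the same pattern extends to KPI reconciliation: aggregate the business measures on both sides and compare the results within an agreed tolerance.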
Birmingham, West Midlands, England, United Kingdom Hybrid / WFH Options
MYO Talent
Data Engineer/Data Engineering/Data Consultant/Lakehouse/Delta Lake/Data Warehousing/ETL/Azure/Azure Databricks/Python/SQL/Based in the West Midlands/Solihull/Birmingham area. Permanent role, £ + car/allowance (£5,000) + 15% bonus. One of our leading clients is looking to recruit … role. Salary: £ + car/allowance + bonus. Experience: Experience in a Data Engineer/Data Engineering role; large and complex datasets; Azure, Azure Databricks; Microsoft SQL Server; Lakehouse, Delta Lake; Data Warehousing; ETL; Database Design; Python/PySpark; Azure Blob Storage; Azure Data Factory. Desirable: Exposure to ML/Machine Learning/AI/Artificial Intelligence …
per day, 100% remote working and initial contract until end of 2025. Key Duties and Responsibilities: Develop and optimise PySpark batch pipelines that process Parquet data and use Delta Lake for all IO, applying validation, enrichment, and billing calculation logic directly in PySpark code. Build reliable PySpark jobs that read/write Delta tables on ADLS Gen2. … with orchestrators (ADF or Container App Orchestrator) and CI/CD pipelines (GitHub Actions). Operate securely within private-network Azure environments (Managed Identity, RBAC, Private Endpoints). PySpark with Delta Lake (structured APIs, MERGE, schema evolution). Solid knowledge of Azure Synapse Spark pools or Databricks, ADLS Gen2, and Azure SQL. Familiarity with ADF orchestration and containerised Spark …
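A hedged sketch of the batch shape this listing describes — read Parquet, apply validation and enrichment in PySpark, and use Delta Lake for output. The ADLS Gen2 paths, column names, and billing formula are illustrative assumptions, not details from the posting:

```python
# Sketch of a PySpark batch job: Parquet in, validation and a billing
# calculation in PySpark, Delta Lake out on ADLS Gen2. All paths and
# column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("billing-batch").getOrCreate()

raw = spark.read.parquet("abfss://landing@examplestore.dfs.core.windows.net/usage/")

# Validation: drop records with missing or negative usage.
validated = raw.filter(F.col("usage_units").isNotNull() & (F.col("usage_units") >= 0))

# Enrichment plus a simple billing calculation, done directly in PySpark.
enriched = validated.withColumn("billed_amount", F.col("usage_units") * F.col("unit_rate"))

(enriched.write
    .format("delta")
    .mode("append")
    .save("abfss://curated@examplestore.dfs.core.windows.net/billing/"))
```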
remote working and initial contract until end of 2025, must be CTC cleared. Key Duties and Responsibilities: Develop and optimise PySpark batch pipelines that process Parquet data and use Delta Lake for all IO, applying validation, enrichment, and billing calculation logic directly in PySpark code. Build reliable PySpark jobs that read/write Delta tables on ADLS … with orchestrators (ADF or Container App Orchestrator) and CI/CD pipelines (GitHub Actions). Operate securely within private-network Azure environments (Managed Identity, RBAC, Private Endpoints). PySpark with Delta Lake (structured APIs, MERGE, schema evolution). Solid knowledge of Azure Synapse Spark pools or Databricks, ADLS Gen2, and Azure SQL. Familiarity with ADF orchestration and containerised Spark …
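This listing also names Delta Lake MERGE and schema evolution specifically; a minimal sketch of that combination follows. The paths and join key are assumptions for illustration:

```python
# Sketch of a Delta Lake MERGE (upsert) with schema evolution enabled,
# so new columns in the source are added to the target schema.
# Paths and the record_id join key are hypothetical.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("delta-merge").getOrCreate()

# Allow new source columns to be merged into the target schema.
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

updates = spark.read.parquet("abfss://landing@examplestore.dfs.core.windows.net/usage_updates/")
target = DeltaTable.forPath(spark, "abfss://curated@examplestore.dfs.core.windows.net/billing/")

(target.alias("t")
    .merge(updates.alias("s"), "t.record_id = s.record_id")
    .whenMatchedUpdateAll()     # update existing records in place
    .whenNotMatchedInsertAll()  # insert genuinely new records
    .execute())
```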
City of London, London, United Kingdom Hybrid / WFH Options
ECS
engineering, with a strong focus on building scalable data pipelines. Expertise in Azure Databricks, including building and managing ETL pipelines using PySpark or Scala. Solid understanding of Apache Spark, Delta Lake, and distributed data processing concepts. Hands-on experience with Azure Data Lake Storage, Azure Data Factory, and Azure Synapse Analytics. Proficiency in SQL and Python for …
City of London, London, United Kingdom Hybrid / WFH Options
Robert Half
years as a Data Engineer. Hands-on with Databricks, Spark, Python, SQL. Cloud experience (Azure, AWS, or GCP). Strong understanding of data quality, governance, and security. Nice to Have: Delta Lake, DBT, Snowflake, Terraform, CI/CD, or DevOps exposure. You'll: Build and optimise ETL pipelines; enable analytics and reporting teams; drive automation and best practices. Why …
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
Lambda, EMR). Strong communication skills and a collaborative mindset. Comfortable working in Agile environments and engaging with stakeholders. Bonus Skills: Experience with Apache Iceberg or similar table formats (e.g., Delta Lake, Hudi). Exposure to CI/CD tools like GitHub Actions, GitLab CI, or Jenkins. Familiarity with data quality frameworks such as Great Expectations or Deequ. Interest in …
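The bonus skills mention data quality frameworks; as a plain-PySpark sketch of the kind of completeness and uniqueness checks those frameworks formalise (the S3 path and column names are hypothetical):

```python
# Hand-rolled data quality checks of the sort Great Expectations or Deequ
# formalise: completeness and uniqueness of a key column.
# Dataset path and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-bucket/events/")

total = df.count()
null_ids = df.filter(F.col("event_id").isNull()).count()
duplicate_ids = total - df.select("event_id").distinct().count()

assert null_ids == 0, f"{null_ids} rows have a null event_id"
assert duplicate_ids == 0, f"{duplicate_ids} duplicate event_id values found"
```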
Herefordshire, West Midlands, United Kingdom Hybrid / WFH Options
IO Associates
and maintain platform software, libraries, and dependencies. Set up and manage Spark clusters, including migrations to new platforms. Manage user accounts and permissions across identity platforms. Maintain the Delta Lake and ensure platform-wide security standards. Collaborate with the wider team to advise on system design and delivery. What we're looking for: Strong Linux engineering …
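"Maintaining the Delta Lake" typically includes routine table maintenance; a minimal sketch under that assumption, with a placeholder table name:

```python
# Routine Delta Lake table maintenance: compact small files with OPTIMIZE
# and purge unreferenced files with VACUUM. The table name is a placeholder;
# 168 hours (7 days) is Delta's default minimum retention for VACUUM.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-maintenance").getOrCreate()

spark.sql("OPTIMIZE lakehouse.billing")                  # compact small files
spark.sql("VACUUM lakehouse.billing RETAIN 168 HOURS")   # remove stale data files
```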