Manchester Area, United Kingdom Hybrid / WFH Options
POWWR
based SaaS data platform. Data Platform Engineering: Design, build, and orchestrate modular, declarative data pipelines using dbt, Dagster, and dlt within a Kubernetes-hosted environment. Develop and optimize Delta Lake models in Databricks/Synapse and Azure Data Lake Gen2, ensuring scalability and ACID compliance. Maintain and enhance existing data pipelines (SQL Server, Azure Data Factory … engineering, BI, and architecture teams as well as business stakeholders. Bonus Skills Knowledge of data governance and lineage tools (e.g., OpenMetadata, DataHub). Use of table formats such as Delta Lake or Iceberg. Experience with data quality frameworks (e.g., dbt, Elementary, Great Expectations). Understanding of event-driven architectures and real-time data streaming (Kafka, Event Hubs). …
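The data quality frameworks named in this listing (dbt tests, Elementary, Great Expectations) all reduce to declarative assertions evaluated over rows. A minimal pure-Python sketch of that idea; the helper names here are illustrative, not any framework's real API:

```python
# Minimal sketch of a declarative data-quality check runner, in the spirit of
# dbt tests or Great Expectations. All names are illustrative, not a real API.

def not_null(column):
    # Check passes when the column has a value
    return lambda row: row.get(column) is not None

def accepted_values(column, allowed):
    # Check passes when the column's value is in the allowed set
    return lambda row: row.get(column) in allowed

def run_checks(rows, checks):
    """Return a dict mapping check name -> number of failing rows."""
    failures = {}
    for name, predicate in checks.items():
        failures[name] = sum(1 for row in rows if not predicate(row))
    return failures

rows = [
    {"id": 1, "status": "active"},
    {"id": 2, "status": "disabled"},
    {"id": None, "status": "active"},
]
checks = {
    "id_not_null": not_null("id"),
    "status_valid": accepted_values("status", {"active", "inactive"}),
}
print(run_checks(rows, checks))  # {'id_not_null': 1, 'status_valid': 1}
```

Real frameworks add scheduling, reporting, and warehouse-native execution on top, but the contract is the same: named checks in, failure counts out.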
Key Responsibilities Lead the design, development, and maintenance of scalable, high-performance data pipelines on Databricks. Architect and implement data ingestion, transformation, and integration workflows using PySpark, SQL, and Delta Lake. Guide the team in migrating legacy ETL processes to modern cloud-based data pipelines. Ensure data accuracy, schema consistency, row counts, and KPIs during migration and transformation. Collaborate … and analytics. ________________________________________ Required Skills & Qualifications 10-12 years of experience in data engineering, with at least 3+ years in a technical lead role. Strong expertise in Databricks, PySpark, Delta Lake, and dbt. Advanced proficiency in SQL, ETL/ELT pipelines, and data modelling. Experience with Azure Data Services (ADLS, ADF, Synapse) or other major cloud platforms (AWS … of data warehousing, transformation logic, SLAs, and dependencies. Hands-on experience with real-time streaming and near-real-time batch is a plus; optimisation of Databricks and dbt workloads and Delta Lake is also valued. Familiarity with CI/CD pipelines, DevOps practices, and Git-based workflows. Knowledge of data security, encryption, and compliance frameworks (GDPR, SOC 2, ISO) is good to have. Excellent …
scalable data solutions. Owning the end-to-end data lifecycle, from ingestion and transformation through to analytics and data product delivery. Architecting and operating pipelines using Databricks, Spark, and Delta Lake, ensuring performance, reliability, and cost-efficiency. Working closely with BI developers and analysts to deliver dashboards, extracts, datasets, and APIs that power customer insights. Shaping platform architecture … supporting their development. Skills & Experience Required Experience leading or mentoring data engineering teams within a SaaS or product-led environment. Deep hands-on knowledge of Databricks, Apache Spark, and Delta Lake, including large-scale or near real-time workloads. Strong proficiency in Python, SQL, and cloud data services (Azure preferred, but any major cloud is fine). Experience …
Bournemouth, Dorset, South West, United Kingdom Hybrid / WFH Options
Sanderson Recruitment
Databricks Engineer: Key Responsibilities Build and maintain Databricks pipelines (batch and incremental) using PySpark and SQL. Orchestrate end-to-end workflows with Azure Data Factory. Develop and optimise Delta Lake tables (partitioning, schema evolution, vacuuming). Implement Medallion Architecture (Bronze, Silver, Gold) for transforming raw data into business-ready datasets. Apply robust monitoring, logging, and error-handling … Engineer: About You Strong PySpark development skills for large-scale data engineering. Proven experience with Databricks pipelines and workflow management. Expertise in Azure Data Factory orchestration. Solid knowledge of Delta Lake and Lakehouse principles. Hands-on experience with SQL for data transformation. Familiarity with Azure services (ADLS/Blob, Key Vault, SQL). Knowledge of ETL/ELT …
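The vacuuming responsibility mentioned above has a simple core rule: Delta-style VACUUM deletes data files that are no longer referenced by the current table snapshot and are older than a retention window (Delta Lake's default is 7 days). A toy pure-Python model of that retention logic, not the real Delta Lake API:

```python
# Toy model of Delta-style VACUUM retention: files that are no longer
# referenced by the current snapshot AND are older than the retention
# window get deleted. Purely illustrative; not the real Delta Lake API.
RETENTION_HOURS = 168  # Delta Lake's default retention window: 7 days

def vacuum(all_files, referenced, now_hours):
    """Return (kept, deleted) lists of file names.

    all_files: dict of file name -> creation time in hours
    referenced: set of file names still referenced by the snapshot
    """
    deleted = [
        name for name, created in all_files.items()
        if name not in referenced and now_hours - created > RETENTION_HOURS
    ]
    kept = [name for name in all_files if name not in deleted]
    return kept, deleted

files = {"a.parquet": 0, "b.parquet": 0, "c.parquet": 190}
kept, deleted = vacuum(files, referenced={"c.parquet"}, now_hours=200)
print(deleted)  # ['a.parquet', 'b.parquet']
```

The retention window exists so that long-running readers and time-travel queries against recent versions do not lose the files they depend on.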
systems that deliver real-world impact. Key Responsibilities: Lead the design, development, and optimisation of scalable machine learning workflows using Azure Databricks Build and deploy robust ML pipelines leveraging Delta Lake, MLflow, notebooks, and Databricks Jobs Apply advanced knowledge of Databricks architecture and performance tuning to support production-grade ML solutions Collaborate with data scientists, data engineers, and … learning platform, tooling, and deployment practices to accelerate delivery Experience and Qualifications Required: Deep hands-on experience with Azure Databricks, particularly in developing and deploying machine learning solutions using Delta Lake, MLflow, and Spark ML/PyTorch/TensorFlow integrations Strong programming skills in Python (including ML libraries like scikit-learn, pandas, PySpark) and experience using SQL for … model training, validation, and deployment Solid understanding of MLOps principles, including model versioning, monitoring, and CI/CD for ML workflows Familiarity with Azure cloud services, including Azure Data Lake, Azure Machine Learning, and Data Factory Experience with feature engineering, model management, and automated retraining in production environments Knowledge of data governance, security, and regulatory compliance in the context …
systems, deploying LLMs, and operationalizing models in production. Key Responsibilities: Design, develop, and deploy ML, Deep Learning, and LLM solutions. Implement scalable ML and data pipelines in Databricks (PySpark, Delta Lake, MLflow). Build automated MLOps pipelines with model tracking, CI/CD, and registry. Deploy and operationalize LLMs, including fine-tuning, prompt optimization, and monitoring. Architect secure … Mentor engineers, enforce best practices, and lead design/architecture reviews. Required Skills & Experience: 5+ years in ML/AI solution development. Recent hands-on experience with Databricks, PySpark, Delta Lake, MLflow. Experience with LLMs (Hugging Face, LangChain, Azure OpenAI). Strong MLOps, CI/CD, and model monitoring experience. Proficiency in Python, PyTorch/TensorFlow, FastAPI …
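The model registry this listing asks for is, at its core, a versioned store with stage promotion: each registered model gets an incrementing version, and promoting one version to a stage such as Production displaces whichever version held it before. A minimal pure-Python sketch in the spirit of the MLflow Model Registry; the class and method names are illustrative, not MLflow's actual API:

```python
# Minimal sketch of a model registry with versioning and stage promotion,
# in the spirit of the MLflow Model Registry. Illustrative names only.
class ModelRegistry:
    def __init__(self):
        self._models = {}  # name -> list of entries; index i = version i+1

    def register(self, name, artifact, metrics):
        versions = self._models.setdefault(name, [])
        versions.append({"artifact": artifact, "metrics": metrics,
                         "stage": "None"})
        return len(versions)  # version numbers start at 1

    def promote(self, name, version, stage):
        # Archive whichever version currently holds this stage
        for entry in self._models[name]:
            if entry["stage"] == stage:
                entry["stage"] = "Archived"
        self._models[name][version - 1]["stage"] = stage

    def get(self, name, stage):
        # Return (version, artifact) for the model in the given stage
        for i, entry in enumerate(self._models[name], start=1):
            if entry["stage"] == stage:
                return i, entry["artifact"]
        return None

registry = ModelRegistry()
registry.register("churn", artifact="model_v1.pkl", metrics={"auc": 0.81})
registry.register("churn", artifact="model_v2.pkl", metrics={"auc": 0.85})
registry.promote("churn", 2, "Production")
print(registry.get("churn", "Production"))  # (2, 'model_v2.pkl')
```

Serving code then asks the registry for "the Production version of churn" rather than hard-coding an artifact path, which is what makes automated promotion and rollback possible.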
City of London, London, United Kingdom Hybrid / WFH Options
Primus
a seasoned engineer who can operate across architecture, delivery, and consulting. Key responsibilities include: Design and build end-to-end data solutions on Databricks, using Spark, Python, SQL, and Delta Lake Apply software engineering best practices: TDD, CI/CD, version control, automation, and clean coding principles Work across the entire software development lifecycle, from design to deployment … code, and driving best practices Collaborate with data scientists, architects, and business teams to deliver production-grade outcomes Essential skills needed: Deep hands-on experience with Databricks (SQL, PySpark, Delta Lake, Unity Catalog, Workflows) Strong proficiency in Python and Spark Solid understanding of CI/CD pipelines, DevOps, and Infrastructure as Code Proven track record designing and delivering …
Birmingham, England, United Kingdom Hybrid / WFH Options
MYO Talent
Data Engineer/Data Engineering/Data Consultant/Lakehouse/Delta Lake/Data Warehousing/ETL/Azure/Azure Databricks/Python/SQL/Based in the West Midlands/Solihull/Birmingham area, Permanent role, £50,000 - 70,000 + car/allowance (£5,000) + 15% bonus. One of our leading clients … + car/allowance + bonus Experience: Experience in a Data Engineer/Data Engineering role Large and complex datasets Azure, Azure Databricks Microsoft SQL Server Lakehouse, Delta Lake Data Warehousing ETL Database Design Python/PySpark Azure Blob Storage Azure Data Factory Desirable: Exposure to ML/Machine Learning/AI/Artificial Intelligence …
to capture Cloud + Databricks needs. Define security, compliance, downtime tolerance, RPO/RTO, SLAs, and cost requirements. Capture data platform requirements across ingestion, transformation, governance, and analytics (Databricks, Delta Lake, Unity Catalog, Workflows). Map service and data dependencies, classify criticality, and align to the Core Cloud capability catalogue. Produce a clear, endorsed baseline of Core Cloud … BA within cloud or data platform programmes (Azure + Databricks ideal). Experience working with the AWS tech stack. Strong experience gathering technical, data, and platform requirements. Understanding of Databricks (Delta Lake, Unity Catalog, governance, clusters, pipelines). Comfortable engaging technical and non-technical stakeholders; strong documentation skills. Nice to Have: Data platform migration experience; exposure to FinOps; agile …
Greater Manchester, England, United Kingdom Hybrid / WFH Options
Searchability®
using Databricks. Strong understanding of Apache Spark (PySpark or Scala) and Structured Streaming. Experience working with Kafka (MSK) and handling real-time data. Good knowledge of Delta Lake/Delta Live Tables and the Medallion architecture. Hands-on experience with AWS services such as S3, Glue, Lambda, Batch, and IAM. Strong skills in …
London, England, United Kingdom Hybrid / WFH Options
Harnham
mindset with a passion for modern data and platform technologies. Nice to Have: Experience implementing data governance and observability stacks (lineage, data contracts, quality monitoring). Knowledge of data lake formats (Delta Lake, Parquet, Iceberg, Hudi). Familiarity with containerisation and streaming technologies (Docker, Kubernetes, Kafka, Flink). Exposure to lakehouse or medallion architectures within Databricks. …
data into trusted, actionable insights that power critical business decisions. Key Responsibilities Design and implement scalable data pipelines and ETL/ELT workflows in Databricks using PySpark, SQL, and Delta Lake. Architect and manage the Medallion (Bronze, Silver, Gold) data architecture for optimal data organization, transformation, and consumption. Develop and maintain data models, schemas, and data quality frameworks across … emerging technologies in cloud data platforms, Lakehouse architecture, and data engineering frameworks. Required Qualifications 6+ years of experience in data engineering 3+ years of hands-on experience with Databricks, Delta Lake, and Spark (PySpark preferred). Proven track record implementing Medallion Architecture (Bronze, Silver, Gold layers) in production environments. Strong knowledge of data modeling, ETL/ELT design … and data lakehouse concepts. Proficiency in Python, SQL, and Spark optimization techniques. Experience working with cloud data platforms such as Azure Data Lake, AWS S3, or GCP BigQuery. Strong understanding of data quality frameworks, testing, and CI/CD pipelines for data workflows. Excellent communication skills and ability to collaborate across teams. Preferred Qualifications Experience with Databricks Unity Catalog …
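The Medallion architecture this listing describes is a layering convention: Bronze holds raw data exactly as ingested, Silver holds deduplicated, typed, and cleaned records, and Gold holds business-level aggregates for consumption. A sketch of those three layers using plain Python lists standing in for Delta tables; the field names and cleaning rules are illustrative assumptions, and production pipelines would do this with PySpark over Delta Lake:

```python
# Sketch of Medallion-style layering with plain Python lists standing in
# for Delta tables: Bronze = raw as ingested, Silver = cleaned and typed,
# Gold = business-level aggregate. Field names are illustrative.
from collections import defaultdict

bronze = [  # raw ingested records, warts and all
    {"order_id": "1", "region": "UK ", "amount": "100.0"},
    {"order_id": "2", "region": "uk", "amount": "50.5"},
    {"order_id": "2", "region": "uk", "amount": "50.5"},   # duplicate
    {"order_id": "3", "region": "DE", "amount": None},     # bad record
]

def to_silver(rows):
    """Deduplicate on order_id, drop bad rows, normalise types."""
    seen, silver = set(), []
    for row in rows:
        if row["amount"] is None or row["order_id"] in seen:
            continue
        seen.add(row["order_id"])
        silver.append({"order_id": row["order_id"],
                       "region": row["region"].strip().upper(),
                       "amount": float(row["amount"])})
    return silver

def to_gold(rows):
    """Aggregate revenue per region for consumption by BI tools."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

print(to_gold(to_silver(bronze)))  # {'UK': 150.5}
```

Keeping Bronze untouched is the point of the pattern: when a cleaning rule changes, Silver and Gold can be rebuilt from the raw layer without re-ingesting anything.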
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
Strong experience with Azure data services (Data Factory, Synapse, Blob Storage, etc.). Proficiency in SQL for data manipulation, transformation, and performance optimisation. Hands-on experience with Databricks (Spark, Delta Lake, notebooks). Solid understanding of data architecture principles and cloud-native design. Experience working in consultancy or client-facing roles is highly desirable. Familiarity with CI/…
data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Need to program using the languages …
and documentation to improve data engineering processes. Mentor junior engineers and support knowledge-sharing across teams. Key Responsibilities: Design, build, and maintain scalable data pipelines using Databricks, Spark, and Delta Lake. Develop efficient ETL/ELT workflows to process large volumes of structured and unstructured data. Implement data governance, security, and compliance standards. Work with cloud platforms such as …/CD processes for data pipeline deployment and monitoring. What We Are Looking For: 5+ years of experience in data engineering or related roles. Strong expertise in Databricks, Spark, Delta Lake, and cloud data platforms (AWS, Azure, or GCP). Proficiency in Python and SQL for data manipulation and transformation. Experience with ETL/ELT development and orchestration …
London, South East, England, United Kingdom Hybrid / WFH Options
Involved Solutions
methodologies that enhance efficiency, scalability, and cost optimisation Essential Skills for the Senior Data Engineer: Proficient with Databricks and Apache Spark, including performance tuning and advanced concepts such as Delta Lake and streaming Strong programming skills in Python with experience in software engineering principles, version control, unit testing and CI/CD pipelines Advanced knowledge of SQL and …
making Constantly improving data architecture and processes to support innovation at scale What We’re Looking For Strong hands-on experience with Azure Databricks, Data Factory, Blob Storage, and Delta Lake Proficiency in Python, PySpark, and SQL Deep understanding of ETL/ELT, CDC, streaming data, and lakehouse architecture Proven ability to optimise data systems for performance, scalability …
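The CDC (change data capture) skill listed above amounts to turning successive snapshots of a keyed table into a stream of insert, update, and delete events that downstream pipelines can apply incrementally. A toy snapshot-diff sketch in pure Python; real CDC systems (e.g. Debezium, or Delta Lake's change data feed) read the source's transaction log instead of diffing snapshots, and the schema here is an illustrative assumption:

```python
# Toy change-data-capture (CDC) diff: compare two snapshots of a keyed
# table and emit insert/update/delete events. Purely illustrative; real
# CDC tools tail the source's transaction log rather than diff snapshots.
def cdc_diff(old, new):
    """old/new: dicts mapping primary key -> row. Returns a list of events."""
    events = []
    for key, row in new.items():
        if key not in old:
            events.append(("insert", key, row))
        elif old[key] != row:
            events.append(("update", key, row))
    for key in old:
        if key not in new:
            events.append(("delete", key, old[key]))
    return events

old = {1: {"name": "Ada"}, 2: {"name": "Bob"}}
new = {1: {"name": "Ada Lovelace"}, 3: {"name": "Cy"}}
for event in cdc_diff(old, new):
    print(event)
# ('update', 1, {'name': 'Ada Lovelace'})
# ('insert', 3, {'name': 'Cy'})
# ('delete', 2, {'name': 'Bob'})
```

Replaying such an event stream against a target table keeps it in sync with the source without full reloads, which is what makes CDC the usual backbone of incremental lakehouse ingestion.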