Key Responsibilities Lead the design, development, and maintenance of scalable, high-performance data pipelines on Databricks. Architect and implement data ingestion, transformation, and integration workflows using PySpark, SQL, and Delta Lake. Guide the team in migrating legacy ETL processes to modern cloud-based data pipelines. Ensure data accuracy, schema consistency, row counts, and KPIs during migration and transformation. Collaborate … and analytics. ________________________________________ Required Skills & Qualifications 10-12 years of experience in data engineering, with at least 3 years in a technical lead role. Strong expertise in Databricks, PySpark, Delta Lake, and dbt. Advanced proficiency in SQL, ETL/ELT pipelines, and data modelling. Experience with Azure Data Services (ADLS, ADF, Synapse) or other major cloud platforms (AWS … of data warehousing, transformation logic, SLAs, and dependencies. Hands-on experience with real-time streaming and near-real-time batch processing is a plus, as is optimisation of Databricks and dbt workloads and Delta Lake. Familiarity with CI/CD pipelines, DevOps practices, and Git-based workflows. Knowledge of data security, encryption, and compliance frameworks (GDPR, SOC 2, ISO) is good to have. Excellent …
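For illustration of the migration-validation point above (checking row counts and KPIs across a migration), here is a minimal PySpark sketch. The table and column names are hypothetical, and `spark` is assumed to be a pre-configured Databricks session.

```python
# Minimal sketch: reconciling a migrated Delta table against its legacy
# source on row counts and a KPI, as part of migration validation.
# Table and column names are hypothetical; `spark` is the Databricks session.
from pyspark.sql import functions as F

legacy = spark.table("legacy.sales")     # placeholder: legacy source table
migrated = spark.table("silver.sales")   # placeholder: migrated Delta table

checks = {
    "row_count": (legacy.count(), migrated.count()),
    "total_revenue": (
        legacy.agg(F.sum("amount")).first()[0],
        migrated.agg(F.sum("amount")).first()[0],
    ),
}

# Fail loudly on any divergence between source and target.
for name, (before, after) in checks.items():
    assert before == after, f"{name} mismatch: {before} != {after}"
```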
Bournemouth, Dorset, South West, United Kingdom Hybrid/Remote Options
Sanderson Recruitment
Databricks Engineer: Key Responsibilities Build and maintain Databricks pipelines (batch and incremental) using PySpark and SQL. Orchestrate end-to-end workflows with Azure Data Factory. Develop and optimise Delta Lake tables (partitioning, schema evolution, vacuuming). Implement Medallion Architecture (Bronze, Silver, Gold) for transforming raw data into business-ready datasets. Apply robust monitoring, logging, and error-handling … Engineer: About You Strong PySpark development skills for large-scale data engineering. Proven experience with Databricks pipelines and workflow management. Expertise in Azure Data Factory orchestration. Solid knowledge of Delta Lake and Lakehouse principles. Hands-on experience with SQL for data transformation. Familiarity with Azure services (ADLS/Blob, Key Vault, SQL). Knowledge of ETL/ELT …
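To make the Delta Lake maintenance and Medallion points above concrete, here is a minimal PySpark sketch of a Bronze-to-Silver step followed by routine table maintenance. Paths and table names are hypothetical, and a Databricks runtime (with `spark` and Delta preconfigured) is assumed.

```python
# Minimal sketch: Bronze -> Silver Medallion step on Databricks, then
# routine Delta maintenance. Paths and table names are illustrative only.
from pyspark.sql import functions as F

# Bronze: raw ingest, stored as-is with load metadata.
raw = (spark.read.format("json")
       .load("/mnt/landing/orders/")               # hypothetical landing path
       .withColumn("_ingested_at", F.current_timestamp()))
raw.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver: deduplicated and filtered, with controlled schema evolution.
silver = (spark.table("bronze.orders")
          .dropDuplicates(["order_id"])
          .filter(F.col("order_id").isNotNull()))
(silver.write.format("delta")
       .mode("overwrite")
       .option("mergeSchema", "true")              # allow additive columns
       .saveAsTable("silver.orders"))

# Maintenance: compact small files, then remove stale ones past retention.
spark.sql("OPTIMIZE silver.orders ZORDER BY (order_id)")
spark.sql("VACUUM silver.orders RETAIN 168 HOURS")  # 7-day retention
```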
scalable data solutions. Owning the end-to-end data lifecycle, from ingestion and transformation through to analytics and data product delivery. Architecting and operating pipelines using Databricks, Spark, and Delta Lake, ensuring performance, reliability, and cost-efficiency. Working closely with BI developers and analysts to deliver dashboards, extracts, datasets, and APIs that power customer insights. Shaping platform architecture … supporting their development. Skills & Experience Required Experience leading or mentoring data engineering teams within a SaaS or product-led environment. Deep hands-on knowledge of Databricks, Apache Spark, and Delta Lake, including large-scale or near real-time workloads. Strong proficiency in Python, SQL, and cloud data services (Azure preferred, but any major cloud is fine). Experience …
systems that deliver real-world impact. Key Responsibilities: Lead the design, development, and optimisation of scalable machine learning workflows using Azure Databricks Build and deploy robust ML pipelines leveraging Delta Lake, MLflow, notebooks, and Databricks Jobs Apply advanced knowledge of Databricks architecture and performance tuning to support production-grade ML solutions Collaborate with data scientists, data engineers, and … learning platform, tooling, and deployment practices to accelerate delivery Experience and Qualifications Required: Deep hands-on experience with Azure Databricks, particularly in developing and deploying machine learning solutions using Delta Lake, MLflow, and Spark ML/PyTorch/TensorFlow integrations Strong programming skills in Python (including ML libraries like scikit-learn, pandas, PySpark) and experience using SQL for … model training, validation, and deployment Solid understanding of MLOps principles, including model versioning, monitoring, and CI/CD for ML workflows Familiarity with Azure cloud services, including Azure Data Lake, Azure Machine Learning, and Data Factory Experience with feature engineering, model management, and automated retraining in production environments Knowledge of data governance, security, and regulatory compliance in the context …
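As a rough illustration of the MLflow pipeline work described above, here is a minimal sketch of tracking a scikit-learn model with MLflow. The experiment path, dataset, and metric are illustrative assumptions, not a prescribed setup.

```python
# Minimal sketch: tracking a scikit-learn model with MLflow on Databricks.
# Experiment path, synthetic data, and the logged metric are assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("/Shared/churn-demo")  # hypothetical experiment path

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```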
City of London, London, United Kingdom Hybrid/Remote Options
Omnis Partners
such as financial services, pharmaceuticals, energy, retail, healthcare, and manufacturing. The Role: Data Engineer (Databricks) We are seeking an experienced Data Engineer with strong expertise in Databricks, Apache Spark, Delta Lake, Python, and SQL to take a lead role in delivering innovative data projects. You will design and build scalable, cloud-based data pipelines on platforms such as … teams, you’ll translate business requirements into powerful, production-grade data solutions. Key Responsibilities: Design, build, and optimise large-scale data pipelines using Databricks and Spark. Implement and maintain Delta Lake architectures and data governance best practices. Deliver end-to-end solutions across cloud platforms (AWS, Azure, or GCP). Provide technical leadership and mentor junior engineers within … practices including CI/CD and automated testing. What You Bring: Proven experience as a Data Engineer working in cloud environments. Expert-level knowledge of Databricks, Apache Spark, and Delta Lake. Advanced Python and SQL programming skills. Strong understanding of CI/CD pipelines, automated testing, and data governance. Excellent communication and stakeholder engagement skills. What’s on Offer …
systems, deploying LLMs, and operationalizing models in production. Key Responsibilities: Design, develop, and deploy ML, Deep Learning, and LLM solutions. Implement scalable ML and data pipelines in Databricks (PySpark, Delta Lake, MLflow). Build automated MLOps pipelines with model tracking, CI/CD, and registry. Deploy and operationalize LLMs, including fine-tuning, prompt optimization, and monitoring. Architect secure … Mentor engineers, enforce best practices, and lead design/architecture reviews. Required Skills & Experience: 5+ years in ML/AI solution development. Recent hands-on experience with Databricks, PySpark, Delta Lake, MLflow. Experience with LLMs (Hugging Face, LangChain, Azure OpenAI). Strong MLOps, CI/CD, and model monitoring experience. Proficiency in Python, PyTorch/TensorFlow, FastAPI …
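To illustrate the model-registry side of the MLOps responsibilities above, here is a minimal sketch of registering and promoting a model with the MLflow client. The run ID and model name are placeholders.

```python
# Minimal sketch: registering a previously logged model and promoting it
# with an alias via the MLflow client. Run ID and names are placeholders.
import mlflow
from mlflow import MlflowClient

run_id = "abc123"  # placeholder: ID of a run that logged a model at "model"
version = mlflow.register_model(f"runs:/{run_id}/model", "churn-classifier")

# Aliases (MLflow 2.x) supersede the older stage-based promotion workflow.
client = MlflowClient()
client.set_registered_model_alias("churn-classifier", "champion", version.version)
```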
Data Engineer/Data Engineering/Data Consultant/Lakehouse/Delta Lake/Data Warehousing/ETL/Azure/Azure Databricks/Python/SQL/Based in the West Midlands/Solihull/Birmingham area, Permanent role, £ + car/allowance (£5,000) + 15% bonus. One of our leading clients is looking to recruit … role Salary £ + car/allowance + bonus Experience: Experience in a Data Engineer/Data Engineering role Large and complex datasets Azure, Azure Databricks Microsoft SQL Server Lakehouse, Delta Lake Data Warehousing ETL Database Design Python/PySpark Azure Blob Storage Azure Data Factory Desirable: Exposure to ML/Machine Learning/AI/Artificial Intelligence …
City of London, London, United Kingdom Hybrid/Remote Options
Primus
a seasoned engineer who can operate across architecture, delivery, and consulting. Key responsibilities include: Design and build end-to-end data solutions on Databricks, using Spark, Python, SQL, and Delta Lake Apply software engineering best practices: TDD, CI/CD, version control, automation, and clean coding principles Work across the entire software development lifecycle, from design to deployment … code, and driving best practices Collaborate with data scientists, architects, and business teams to deliver production-grade outcomes Essential skills needed: Deep hands-on experience with Databricks (SQL, PySpark, Delta Lake, Unity Catalog, Workflows) Strong proficiency in Python and Spark Solid understanding of CI/CD pipelines, DevOps, and Infrastructure as Code Proven track record designing and delivering …
to capture Cloud + Databricks needs. Define security, compliance, downtime tolerance, RPO/RTO, SLAs, and cost requirements. Capture data platform requirements across ingestion, transformation, governance, and analytics (Databricks, Delta Lake, Unity Catalog, Workflows). Map service and data dependencies, classify criticality, and align to the Core Cloud capability catalogue. Produce a clear, endorsed baseline of Core Cloud … BA within cloud or data platform programmes (Azure + Databricks ideal). Experience working with the AWS tech stack. Strong experience gathering technical, data, and platform requirements. Understanding of Databricks (Delta Lake, Unity Catalog, governance, clusters, pipelines). Comfortable engaging technical and non-technical stakeholders; strong documentation skills. Nice to Have: Data platform migration experience; exposure to FinOps; agile …
Greater Manchester, England, United Kingdom Hybrid/Remote Options
Searchability®
using Databricks. Strong understanding of Apache Spark (PySpark or Scala) and Structured Streaming. Experience working with Kafka (MSK) and handling real-time data. Good knowledge of Delta Lake/Delta Live Tables and the Medallion architecture. Hands-on experience with AWS services such as S3, Glue, Lambda, Batch, and IAM. Strong skills in …
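As a sketch of the Kafka-to-Delta streaming pattern these requirements describe, the following reads a Kafka topic with Structured Streaming and appends to a Bronze Delta table. The broker address, topic, paths, and table name are all assumptions.

```python
# Minimal sketch: streaming Kafka events into a Bronze Delta table on
# Databricks. Broker, topic, checkpoint path, and table are placeholders.
from pyspark.sql import functions as F

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
          .option("subscribe", "orders")                     # placeholder topic
          .option("startingOffsets", "latest")
          .load())

# Kafka delivers key/value as binary; cast to strings for downstream use.
parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

(parsed.writeStream
       .format("delta")
       .option("checkpointLocation", "/mnt/chk/orders_bronze")  # placeholder
       .outputMode("append")
       .toTable("bronze.orders_events"))
```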
London, South East, England, United Kingdom Hybrid/Remote Options
Harnham - Data & Analytics Recruitment
mindset with a passion for modern data and platform technologies. Nice to Have: Experience implementing data governance and observability stacks (lineage, data contracts, quality monitoring). Knowledge of data lake formats (Delta Lake, Parquet, Iceberg, Hudi). Familiarity with containerisation and streaming technologies (Docker, Kubernetes, Kafka, Flink). Exposure to lakehouse or medallion architectures within Databricks.
data into trusted, actionable insights that power critical business decisions. Key Responsibilities Design and implement scalable data pipelines and ETL/ELT workflows in Databricks using PySpark, SQL, and Delta Lake. Architect and manage the Medallion (Bronze, Silver, Gold) data architecture for optimal data organization, transformation, and consumption. Develop and maintain data models, schemas, and data quality frameworks across … emerging technologies in cloud data platforms, Lakehouse architecture, and data engineering frameworks. Required Qualifications 6+ years of experience in data engineering 3+ years of hands-on experience with Databricks, Delta Lake, and Spark (PySpark preferred). Proven track record implementing Medallion Architecture (Bronze, Silver, Gold layers) in production environments. Strong knowledge of data modeling, ETL/ELT design … and data lakehouse concepts. Proficiency in Python, SQL, and Spark optimization techniques. Experience working with cloud data platforms such as Azure Data Lake, AWS S3, or GCP BigQuery. Strong understanding of data quality frameworks, testing, and CI/CD pipelines for data workflows. Excellent communication skills and ability to collaborate across teams. Preferred Qualifications Experience with Databricks Unity Catalog …
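Complementing the Bronze-to-Silver sketch earlier, here is a minimal PySpark sketch of a Gold-layer aggregate over a Silver Delta table, the final step of the Medallion flow described above. Table and column names are illustrative.

```python
# Minimal sketch: a Gold-layer aggregate built from a Silver Delta table,
# per the Bronze/Silver/Gold pattern. Names are illustrative placeholders.
from pyspark.sql import functions as F

gold = (spark.table("silver.orders")
        .groupBy("customer_id", F.to_date("order_ts").alias("order_date"))
        .agg(F.count("*").alias("order_count"),
             F.sum("amount").alias("revenue")))

# Business-ready dataset for BI and analytics consumers.
(gold.write.format("delta")
     .mode("overwrite")
     .saveAsTable("gold.daily_customer_revenue"))
```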
and documentation to improve data engineering processes. Mentor junior engineers and support knowledge-sharing across teams. Key Responsibilities: Design, build, and maintain scalable data pipelines using Databricks, Spark, and Delta Lake. Develop efficient ETL/ELT workflows to process large volumes of structured and unstructured data. Implement data governance, security, and compliance standards. Work with cloud platforms such as …/CD processes for data pipeline deployment and monitoring. What We Are Looking For: 5+ years of experience in data engineering or related roles. Strong expertise in Databricks, Spark, Delta Lake, and cloud data platforms (AWS, Azure, or GCP). Proficiency in Python and SQL for data manipulation and transformation. Experience with ETL/ELT development and orchestration …
data engineering, BI, analytics, and AI/ML teams to design robust, reusable, and production-grade data pipelines and model deployment frameworks. Champion the adoption of Databricks capabilities including Delta Lake, Unity Catalog, and MLflow, ensuring alignment with enterprise AI strategy. Lead the migration of legacy ETL and data processing workflows to modern, Databricks-native architectures that support … a proven ability to design, optimise, and scale high-performance data pipelines for AI and analytics applications. Deep understanding of cloud-native architecture within the Azure ecosystem (including Data Lake, Data Factory, and supporting services) to build resilient and scalable AI data platforms. Skilled in data modelling and solution design, applying dimensional modelling principles (e.g., Kimball methodology) to support …
data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Need to program using the languages …
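As an illustration of the Spark SQL cleaning and enrichment work described above, here is a minimal sketch using Unity Catalog three-part names. The catalog, schema, and column names are placeholders.

```python
# Minimal sketch: cleaning and enriching with Spark SQL using Unity Catalog
# three-part names. Catalog, schema, and column names are placeholders.
spark.sql("""
    CREATE OR REPLACE TABLE main.silver.customers AS
    SELECT
        customer_id,
        trim(lower(email))           AS email,        -- normalise casing/whitespace
        coalesce(country, 'UNKNOWN') AS country,      -- fill missing values
        current_timestamp()          AS processed_at  -- lineage timestamp
    FROM main.bronze.customers
    WHERE customer_id IS NOT NULL
""")
```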
London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
methodologies that enhance efficiency, scalability, and cost optimisation Essential Skills for the Senior Data Engineer: Proficient with Databricks and Apache Spark, including performance tuning and advanced concepts such as Delta Lake and streaming Strong programming skills in Python with experience in software engineering principles, version control, unit testing and CI/CD pipelines Advanced knowledge of SQL and …
making Constantly improving data architecture and processes to support innovation at scale What We’re Looking For Strong hands-on experience with Azure Databricks, Data Factory, Blob Storage, and Delta Lake Proficiency in Python, PySpark, and SQL Deep understanding of ETL/ELT, CDC, streaming data, and lakehouse architecture Proven ability to optimise data systems for performance, scalability …
at the forefront of emerging technologies. Nice to Have Experience rolling out data governance and observability frameworks, including lineage tracking, SLAs, and data quality monitoring. Familiarity with modern data lake table formats such as Delta Lake, Iceberg, or Hudi. Background in stream processing (Kafka, Flink, or similar ecosystems). Exposure to containerisation and orchestration technologies such as …
with Databricks (including notebooks, clusters, and job orchestration) Strong knowledge of Apache Spark, PySpark, and distributed data processing Experience building and optimising ETL pipelines and data workflows Familiarity with Delta Lake, SQL, and data modelling best practices Ability to work with large, complex datasets from multiple sources Comfortable working independently in a fully remote environment Strong understanding of … detail and structured approach to analysis Ability to interpret business needs and translate them into scalable data solutions Desirable Experience with Azure Databricks and related cloud services (Azure Data Lake, Data Factory) Knowledge of data governance, security, and compliance in government environments Familiarity with CI/CD pipelines for data engineering This is an excellent opportunity for experienced Databricks …
using Azure Data Factory, Databricks (Python/PySpark), and advanced SQL. Productionise Databricks: Lead the development of robust, scalable solutions on Databricks. This is a role focused on production code, Delta Lake, Structured Streaming, and Spark performance tuning, not just ad-hoc notebooks. Champion DataOps & CI/CD: Implement and manage CI/CD processes for data pipelines using …
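To ground the Spark performance-tuning point above, here is a minimal sketch of two common levers, adaptive query execution and a broadcast join, plus a partitioned Delta write. Table names and the broadcast choice are assumptions.

```python
# Minimal sketch: common Spark tuning levers. Table names and the decision
# to broadcast are assumptions, not recommendations for any real workload.
from pyspark.sql import functions as F

spark.conf.set("spark.sql.adaptive.enabled", "true")  # adaptive query execution

facts = spark.table("silver.orders")      # large fact table (placeholder)
dims = spark.table("silver.customers")    # small dimension table (placeholder)

# Broadcasting the small side avoids a full shuffle join.
joined = facts.join(F.broadcast(dims), "customer_id")

# Partitioned Delta layout keeps downstream reads selective.
(joined.write.format("delta")
       .mode("overwrite")
       .partitionBy("order_date")
       .saveAsTable("gold.orders_enriched"))
```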
Advanced proficiency in Power BI, including DAX, Power Query (M), and data modelling. Deep understanding of data warehousing, ETL, and data lakehouse concepts. Strong working knowledge of Databricks, including Delta Lake and notebooks. Strong interpersonal skills with the ability to influence and communicate complex data topics clearly. Excellent analytical, organisational, and problem-solving abilities. Experience leading or mentoring …