decision making Desirable Skills: • Experience in financial services or data-heavy environments • Knowledge of reconciliation or financial data processing systems • Exposure to containerised environments (Docker/Kubernetes) • Experience with Delta Lake and Microsoft Fabric Tech Stack SQL | Python | PowerShell | .NET | C# Azure Data Lake | Azure Blob Storage | Delta Lake Performance Testing: JMeter, k6 Monitoring: Azure …
Bournemouth, Dorset, South West, United Kingdom Hybrid/Remote Options
Sanderson Recruitment
Databricks Engineer: Key Responsibilities Build and maintain Databricks pipelines (batch and incremental) using PySpark and SQL. Orchestrate end-to-end workflows with Azure Data Factory. Develop and optimise Delta Lake tables (partitioning, schema evolution, vacuuming). Implement Medallion Architecture (Bronze, Silver, Gold) for transforming raw data into business-ready datasets. Apply robust monitoring, logging, and error-handling … Engineer: About You Strong PySpark development skills for large-scale data engineering. Proven experience with Databricks pipelines and workflow management. Expertise in Azure Data Factory orchestration. Solid knowledge of Delta Lake and Lakehouse principles. Hands-on experience with SQL for data transformation. Familiarity with Azure services (ADLS/Blob, Key Vault, SQL). Knowledge of ETL/ELT …
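The Medallion (Bronze, Silver, Gold) flow mentioned above can be sketched as follows. This is an illustration only, using plain Python instead of PySpark/Delta Lake so it runs anywhere; the records, field names, and cleaning rules are invented for the example.

```python
# Bronze: raw ingested records, kept as-is (duplicates and bad rows included).
bronze = [
    {"order_id": "1", "amount": "10.50", "country": "UK"},
    {"order_id": "1", "amount": "10.50", "country": "UK"},   # duplicate
    {"order_id": "2", "amount": "bad",   "country": "FR"},   # malformed
    {"order_id": "3", "amount": "7.25",  "country": "UK"},
]

def to_silver(rows):
    """Silver: cast types, drop malformed rows, deduplicate on the key."""
    seen, out = set(), []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine these for review
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        out.append({"order_id": r["order_id"], "amount": amount,
                    "country": r["country"]})
    return out

def to_gold(rows):
    """Gold: aggregate to a business-ready metric (revenue per country)."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'UK': 17.75}
```

In Databricks each layer would be a Delta table written by a PySpark job; the point of the layering is the same, though: raw data lands untouched, cleaning happens once in Silver, and Gold holds only consumption-ready aggregates.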
scalable data solutions. Owning the end-to-end data lifecycle — from ingestion and transformation through to analytics and data product delivery. Architecting and operating pipelines using Databricks, Spark, and Delta Lake, ensuring performance, reliability, and cost-efficiency. Working closely with BI developers and analysts to deliver dashboards, extracts, datasets, and APIs that power customer insights. Shaping platform architecture … supporting their development. Skills & Experience Required Experience leading or mentoring data engineering teams within a SaaS or product-led environment. Deep hands-on knowledge of Databricks, Apache Spark, and Delta Lake, including large-scale or near real-time workloads. Strong proficiency in Python, SQL, and cloud data services (Azure preferred, but any major cloud is fine). Experience …
systems that deliver real-world impact. Key Responsibilities: Lead the design, development, and optimisation of scalable machine learning workflows using Azure Databricks Build and deploy robust ML pipelines leveraging Delta Lake, MLflow, notebooks, and Databricks Jobs Apply advanced knowledge of Databricks architecture and performance tuning to support production-grade ML solutions Collaborate with data scientists, data engineers, and … learning platform, tooling, and deployment practices to accelerate delivery Experience and Qualifications Required: Deep hands-on experience with Azure Databricks, particularly in developing and deploying machine learning solutions using Delta Lake, MLflow, and Spark ML/PyTorch/TensorFlow integrations Strong programming skills in Python (including ML libraries like scikit-learn, pandas, PySpark) and experience using SQL for … model training, validation, and deployment Solid understanding of MLOps principles, including model versioning, monitoring, and CI/CD for ML workflows Familiarity with Azure cloud services, including Azure Data Lake, Azure Machine Learning, and Data Factory Experience with feature engineering, model management, and automated retraining in production environments Knowledge of data governance, security, and regulatory compliance in the context …
City of London, London, United Kingdom Hybrid/Remote Options
Omnis Partners
such as financial services, pharmaceuticals, energy, retail, healthcare, and manufacturing. The Role: Data Engineer (Databricks) We are seeking an experienced Data Engineer with strong expertise in Databricks, Apache Spark, Delta Lake, Python, and SQL to take a lead role in delivering innovative data projects. You will design and build scalable, cloud-based data pipelines on platforms such as … teams, you’ll translate business requirements into powerful, production-grade data solutions. Key Responsibilities: Design, build, and optimise large-scale data pipelines using Databricks and Spark. Implement and maintain Delta Lake architectures and data governance best practices. Deliver end-to-end solutions across cloud platforms (AWS, Azure, or GCP). Provide technical leadership and mentor junior engineers within … practices including CI/CD and automated testing. What You Bring: Proven experience as a Data Engineer working in cloud environments. Expert-level knowledge of Databricks, Apache Spark, and Delta Lake. Advanced Python and SQL programming skills. Strong understanding of CI/CD pipelines, automated testing, and data governance. Excellent communication and stakeholder engagement skills. What’s on Offer …
Data Engineer/Data Engineering/Data Consultant/Lakehouse/Delta Lake/Data Warehousing/ETL/Azure/Azure Databricks/Python/SQL/Based in the West Midlands/Solihull/Birmingham area, Permanent role, £50,000 - £70,000 + car/allowance (£5,000) + 15% bonus. One of our leading clients … + car/allowance + bonus Experience: Experience in a Data Engineer/Data Engineering role Large and complex datasets Azure, Azure Databricks Microsoft SQL Server Lakehouse, Delta Lake Data Warehousing ETL Database Design Python/PySpark Azure Blob Storage Azure Data Factory Desirable: Exposure ML/Machine Learning/AI/Artificial Intelligence …
Greater Manchester, England, United Kingdom Hybrid/Remote Options
Searchability®
using Databricks . Strong understanding of Apache Spark (PySpark or Scala) and Structured Streaming . Experience working with Kafka (MSK) and handling real-time data . Good knowledge of Delta Lake/Delta Live Tables and the Medallion architecture . Hands-on experience with AWS services such as S3, Glue, Lambda, Batch, and IAM. Strong skills in …
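The windowed aggregation at the heart of Structured Streaming jobs like the one above can be sketched without a cluster. This is a hedged simulation in plain Python: the event tuples stand in for records arriving from a Kafka topic, and the sensor IDs, timestamps, and window size are all invented.

```python
from collections import defaultdict

# (epoch_seconds, key, value) triples, as events might arrive from Kafka.
events = [
    (0,  "s1", 2.0),
    (5,  "s1", 4.0),
    (12, "s1", 6.0),
    (14, "s2", 1.0),
]

def tumbling_window_avg(stream, window_secs=10):
    """Average per (window_start, key), roughly what
    groupBy(window(...), key).avg() computes in Structured Streaming."""
    buckets = defaultdict(list)
    for ts, key, value in stream:
        window_start = (ts // window_secs) * window_secs  # bucket into tumbling windows
        buckets[(window_start, key)].append(value)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

print(tumbling_window_avg(events))
# {(0, 's1'): 3.0, (10, 's1'): 6.0, (10, 's2'): 1.0}
```

A real streaming job also has to handle late data (watermarks) and incremental state; this sketch only shows the bucketing logic itself.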
data into trusted, actionable insights that power critical business decisions. Key Responsibilities Design and implement scalable data pipelines and ETL/ELT workflows in Databricks using PySpark, SQL, and Delta Lake. Architect and manage the Medallion (Bronze, Silver, Gold) data architecture for optimal data organization, transformation, and consumption. Develop and maintain data models, schemas, and data quality frameworks across … emerging technologies in cloud data platforms, Lakehouse architecture, and data engineering frameworks. Required Qualifications 6+ years of experience in data engineering 3+ years of hands-on experience with Databricks, Delta Lake, and Spark (PySpark preferred). Proven track record implementing Medallion Architecture (Bronze, Silver, Gold layers) in production environments. Strong knowledge of data modeling, ETL/ELT design … and data lakehouse concepts. Proficiency in Python, SQL, and Spark optimization techniques. Experience working with cloud data platforms such as Azure Data Lake, AWS S3, or GCP BigQuery. Strong understanding of data quality frameworks, testing, and CI/CD pipelines for data workflows. Excellent communication skills and ability to collaborate across teams. Preferred Qualifications Experience with Databricks Unity Catalog …
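The incremental-load step these pipelines rely on is usually a Delta Lake MERGE (upsert). A minimal sketch of the semantics, in plain Python rather than the actual `MERGE INTO target USING updates ON ...` SQL, with invented customer records:

```python
# Current Silver table, keyed by customer_id (invented data).
target = {
    "c1": {"customer_id": "c1", "email": "old@example.com"},
    "c2": {"customer_id": "c2", "email": "c2@example.com"},
}

# Incremental batch arriving from Bronze.
updates = [
    {"customer_id": "c1", "email": "new@example.com"},  # matched -> update
    {"customer_id": "c3", "email": "c3@example.com"},   # not matched -> insert
]

def merge(target, updates, key):
    """WHEN MATCHED THEN UPDATE, WHEN NOT MATCHED THEN INSERT."""
    for row in updates:
        target[row[key]] = row  # update-or-insert on the merge key
    return target

merge(target, updates, "customer_id")
print(sorted(target))  # ['c1', 'c2', 'c3']
```

Delta Lake does this transactionally over files rather than an in-memory dict, but the matched/not-matched decision on the join key is the same.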
you love solving complex data challenges and building scalable solutions, this is your chance to make an impact. What You'll Work With Azure Data Services: Data Factory, Data Lake, SQL Databricks: Spark, Delta Lake Power BI: Advanced dashboards ETL & Data Modelling: T-SQL, metadata-driven pipelines DevOps: CI/CD Bonus: Python What you'll do …
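The "metadata-driven pipelines" idea in the listing above means one generic loader driven by a config table, instead of a hand-written pipeline per source. A small sketch, with source names, paths, and transforms invented for illustration:

```python
# In practice this metadata might live in a SQL control table or a JSON file.
PIPELINES = [
    {"source": "sales", "path": "/landing/sales", "transform": "upper_region"},
    {"source": "stock", "path": "/landing/stock", "transform": "none"},
]

# Registry of named transforms the metadata can refer to.
TRANSFORMS = {
    "upper_region": lambda row: {**row, "region": row["region"].upper()},
    "none": lambda row: row,
}

def run_pipeline(meta, rows):
    """Apply the transform named in the metadata to every row of the source."""
    fn = TRANSFORMS[meta["transform"]]
    return [fn(r) for r in rows]

demo = run_pipeline(PIPELINES[0], [{"region": "uk", "qty": 3}])
print(demo)  # [{'region': 'UK', 'qty': 3}]
```

Adding a new source then becomes a metadata change rather than new code, which is the main selling point of the pattern.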
data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Need to program using the languages …
and documentation to improve data engineering processes. Mentor junior engineers and support knowledge-sharing across teams. Key Responsibilities: Design, build, and maintain scalable data pipelines using Databricks, Spark, and Delta Lake. Develop efficient ETL/ELT workflows to process large volumes of structured and unstructured data. Implement data governance, security, and compliance standards. Work with cloud platforms such as …/CD processes for data pipeline deployment and monitoring. What We Are Looking For: 5+ years of experience in data engineering or related roles. Strong expertise in Databricks, Spark, Delta Lake, and cloud data platforms (AWS, Azure, or GCP). Proficiency in Python and SQL for data manipulation and transformation. Experience with ETL/ELT development and orchestration …
in library-oriented development, creating reusable functions and modules. • Strong SQL skills for data querying and transformation. • Experience with Azure cloud services. • Experience with Databricks, including Delta Lake, Spark SQL, and Unity Catalog. Ability to optimize Spark jobs for performance and scalability. • Experience with Azure DevOps for CI/CD. • Knowledge of data …
making Constantly improving data architecture and processes to support innovation at scale What We’re Looking For Strong hands-on experience with Azure Databricks, Data Factory, Blob Storage, and Delta Lake Proficiency in Python, PySpark, and SQL Deep understanding of ETL/ELT, CDC, streaming data, and lakehouse architecture Proven ability to optimise data systems for performance, scalability …
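The CDC (change data capture) requirement above boils down to replaying a feed of insert/update/delete events against a target table in sequence order. A hedged plain-Python sketch of that replay, standing in for the Delta Lake merge machinery, with invented events:

```python
# (sequence, op, row) as a CDC feed might deliver change events.
changes = [
    (1, "insert", {"id": "a", "qty": 1}),
    (2, "update", {"id": "a", "qty": 5}),
    (3, "insert", {"id": "b", "qty": 2}),
    (4, "delete", {"id": "b"}),
]

def apply_changes(target, feed):
    """Replay CDC events oldest-first so later changes win."""
    for _, op, row in sorted(feed, key=lambda e: e[0]):
        if op == "delete":
            target.pop(row["id"], None)
        else:  # insert and update both become an upsert on the key
            target[row["id"]] = row
    return target

state = apply_changes({}, changes)
print(state)  # {'a': {'id': 'a', 'qty': 5}}
```

Real CDC pipelines also deal with out-of-order delivery and exactly-once guarantees; sorting on the sequence column is the simplest version of that ordering concern.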
Advanced proficiency in Power BI, including DAX, Power Query (M), and data modelling. Deep understanding of data warehousing, ETL, and data lakehouse concepts. Strong working knowledge of Databricks, including Delta Lake and notebooks. Strong interpersonal skills with the ability to influence and communicate complex data topics clearly. Excellent analytical, organisational, and problem-solving abilities. Experience leading or mentoring …
your skills in analytics engineering, responding to business and project needs rather than operating as a narrow silo. You'll work hands-on with Azure Databricks, Azure Data Factory, Delta Lake, and Power BI to create scalable data models, automated pipelines, and self-service analytics capabilities. This is a fantastic opportunity to join a newly created team, work …
Glasgow, Scotland, United Kingdom Hybrid/Remote Options
Square One Resources
development experience (mandatory) Strong SQL, Python, and PySpark Experience with GitLab and unit testing Knowledge of modern data engineering patterns and best practices Desirable (Databricks Track) Apache Spark Databricks (Delta Lake, Unity Catalog, MLflow) Experience with Databricks migration or development AI/ML understanding (nice to have) …