…and optimise Spark-based data pipelines for batch and streaming workloads
- Develop Fabric dataflows, pipelines, and semantic models
- Implement complex transformations, joins, aggregations, and performance tuning (a short sketch follows this listing)
- Build and optimise Delta Lake/Delta tables
- Develop secure data solutions including role-based access, data masking, and compliance controls
- Implement data validation, cleansing, profiling, and documentation
- Work closely with …
- …on experience with PySpark, Spark SQL, Spark Streaming, DataFrames
- Microsoft Fabric (Fabric Spark jobs, dataflows, pipelines, semantic models)
- Azure: ADLS, cloud data engineering, notebooks
- Python programming; Java exposure beneficial
- Delta Lake/Delta table optimisation experience
- Git/GitLab, CI/CD pipelines, DevOps practices
- Strong troubleshooting and problem-solving ability
- Experience with lakehouse architectures, ETL workflows … and distributed computing
- Familiarity with time-series, market data, transactional data, or risk metrics
Nice to Have
- Power BI dataset preparation
- OneLake, Azure Data Lake, Kubernetes, Docker
- Knowledge of financial regulations (GDPR, SOX)
Details
- Location: London (office-based)
- Type: Contract
- Duration: 6 months
- Start: ASAP
- Rate: Market rates
If you are a PySpark/Fabric/Azure Data Engineer …
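For candidates gauging the level expected, the join-and-aggregation work named above typically looks like the following minimal PySpark sketch; the table paths and column names are illustrative assumptions, not taken from the role.

```python
# Minimal PySpark sketch of a join-plus-aggregation step; all paths and
# column names are illustrative, not taken from the advertised role.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-aggregation").getOrCreate()

trades = spark.read.format("delta").load("/mnt/lake/silver/trades")
books = spark.read.format("delta").load("/mnt/lake/silver/books")

# Broadcasting the small dimension table avoids shuffling the large fact
# table, a common first step in the performance tuning the role mentions.
daily = (
    trades.join(F.broadcast(books), on="book_id", how="inner")
          .groupBy("book_id", F.to_date("trade_ts").alias("trade_date"))
          .agg(F.sum("notional").alias("gross_notional"),
               F.count("*").alias("trade_count"))
)

daily.write.format("delta").mode("overwrite").save("/mnt/lake/gold/daily_summary")
```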
Senior Azure Data Engineer (Databricks, Delta Live Tables, Unity Catalog)
3-6 months | Remote (UK candidates only) | £550-£600 per day | Outside IR35
We are currently recruiting for an experienced Data Engineer skilled in Microsoft Azure and cloud computing concepts. As the Azure Data Engineer, you will work closely with a Microsoft & Databricks partner with responsibilities of end-to-end …
- …sales, technical architecture, or consulting role
- Experience working on Big Data architectures independently
- Comfortable writing code in Python
- Experience working across Azure including Azure Data Factory, Azure Synapse, Azure Data Lake Storage, Delta Lake, etc.
- Experience with Purview, Unity Catalog, etc.
- Experience with streaming data in Kafka/Event Hubs/Stream Analytics, etc. (sketched below)
- Experience working …
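As context for the streaming requirement, a minimal Structured Streaming sketch reading from a Kafka-compatible endpoint might look like this; the broker address, topic, and storage paths are placeholder assumptions (Event Hubs can be consumed through its Kafka-compatible endpoint).

```python
# Hedged Structured Streaming sketch for the Kafka/Event Hubs skill named above.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker-host:9093")  # placeholder broker
       .option("subscribe", "events")                          # placeholder topic
       .option("startingOffsets", "latest")
       .load())

# Kafka delivers binary key/value pairs; cast the payload before downstream use.
events = raw.select(F.col("value").cast("string").alias("payload"),
                    F.col("timestamp").alias("event_ts"))

# Checkpointing gives the stream restartable, exactly-once delivery into Delta.
(events.writeStream.format("delta")
       .option("checkpointLocation", "/mnt/lake/_checkpoints/events")
       .outputMode("append")
       .start("/mnt/lake/bronze/events"))
```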
Central London, London, United Kingdom Hybrid/Remote Options
McCabe & Barton
…ETL/ELT processes to transform raw data into structured, analytics-ready formats. Optimise pipeline performance and ensure high availability of data services.
Infrastructure & Architecture
- Architect and deploy scalable data lake solutions using Azure Data Lake Storage.
- Implement governance and security measures across the platform.
- Leverage Terraform or similar IaC tools for controlled and reproducible deployments.
Databricks Development …
- Develop and optimise data jobs using PySpark or Scala within Databricks.
- Implement the medallion architecture (bronze, silver, gold layers) and use Delta Lake for reliable data transactions.
- Manage cluster configurations and CI/CD pipelines for Databricks deployments.
Monitoring & Operations
- Implement monitoring solutions using Azure Monitor, Log Analytics, and Databricks tools.
- Optimise performance, ensure SLAs are met, and … for knowledge sharing.
Essential Skills & Experience
- 5+ years of experience with Azure services (Azure Data Factory, ADLS, Azure SQL Database, Synapse Analytics).
- Strong hands-on expertise in Databricks, Delta Lake, and cluster management.
- Proficiency in SQL and Python for pipeline development.
- Familiarity with Git/GitHub and CI/CD practices.
- Understanding of data modelling, data …
Bournemouth, Dorset, South West, United Kingdom Hybrid/Remote Options
Sanderson Recruitment
Databricks Engineer: Key Responsibilities
- Build and maintain Databricks pipelines (batch and incremental) using PySpark and SQL.
- Orchestrate end-to-end workflows with Azure Data Factory.
- Develop and optimise Delta Lake tables (partitioning, schema evolution, vacuuming); a sketch of these operations follows this listing.
- Implement Medallion Architecture (Bronze, Silver, Gold) for transforming raw data into business-ready datasets.
- Apply robust monitoring, logging, and error-handling …
Databricks Engineer: About You
- Strong PySpark development skills for large-scale data engineering.
- Proven experience with Databricks pipelines and workflow management.
- Expertise in Azure Data Factory orchestration.
- Solid knowledge of Delta Lake and Lakehouse principles.
- Hands-on experience with SQL for data transformation.
- Familiarity with Azure services (ADLS/Blob, Key Vault, SQL).
- Knowledge of ETL/ELT …
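The Delta Lake maintenance tasks named above boil down to a few standard operations; a minimal sketch, where the paths, the toy DataFrame, and the 7-day retention window are illustrative assumptions.

```python
# Minimal sketch of Delta Lake maintenance: partitioning on write, schema
# evolution via mergeSchema, then OPTIMIZE/VACUUM. Paths and retention are
# illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.range(100).withColumn("ingest_date", F.current_date())  # toy data

(df.write.format("delta")
   .mode("append")
   .option("mergeSchema", "true")   # allow new columns to evolve the table schema
   .partitionBy("ingest_date")      # enables partition pruning for date queries
   .save("/mnt/lake/silver/orders"))

# Compact small files, then remove stale files older than 7 days (168 hours).
spark.sql("OPTIMIZE delta.`/mnt/lake/silver/orders`")
spark.sql("VACUUM delta.`/mnt/lake/silver/orders` RETAIN 168 HOURS")
```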
…data governance across modern cloud environments.
Key Responsibilities
- Design, build, and maintain scalable data pipelines using Databricks Notebooks, Jobs, and Workflows for both batch and streaming data.
- Optimise Spark and Delta Lake performance through efficient cluster configuration, adaptive query execution, and caching strategies.
- Conduct performance testing and cluster tuning to ensure cost-efficient, high-performing workloads.
- Implement data quality … control policies aligned with Databricks Unity Catalog and governance best practices.
- Develop PySpark applications for ETL, data transformation, and analytics, following modular and reusable design principles.
- Create and manage Delta Lake tables with ACID compliance, schema evolution, and time travel for versioned data management (a time-travel sketch follows this listing).
- Integrate Databricks solutions with Azure services such as Azure Data Lake Storage, Key Vault, and Azure Functions.
What We're Looking For
- Proven experience with Databricks, PySpark, and Delta Lake.
- Strong understanding of workflow orchestration, performance optimisation, and data governance.
- Hands-on experience with Azure cloud services.
- Ability to work in a fast-paced environment and deliver high-quality solutions.
- SC Cleared candidates
If you're interested in this role, click 'apply' …
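Time travel, as referenced in the listing above, is queried through Delta's version and timestamp read options; a brief illustrative sketch, in which the table path, version number, and timestamp are placeholder assumptions.

```python
# Illustrative sketch of Delta time travel; path, version, and date are
# placeholders, not taken from the role.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

path = "/mnt/lake/gold/positions"

# Read the table as it existed at an earlier commit, or at a wall-clock time.
as_of_v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
as_of_day = (spark.read.format("delta")
             .option("timestampAsOf", "2024-06-01")
             .load(path))

# Inspect the commit log that makes those versions addressable.
spark.sql(f"DESCRIBE HISTORY delta.`{path}`").show(truncate=False)
```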
Data Engineer/Data Engineering/Data Consultant/Lakehouse/Delta Lake/Data Warehousing/ETL/Azure/Azure Databricks/Python/SQL. Based in the West Midlands/Solihull/Birmingham area. Permanent role, £… + car/allowance (£5,000) + 15% bonus. One of our leading clients is looking to recruit … role. Salary £… + car/allowance + bonus.
Experience:
- Experience in a Data Engineer/Data Engineering role
- Large and complex datasets
- Azure, Azure Databricks
- Microsoft SQL Server
- Lakehouse, Delta Lake
- Data Warehousing
- ETL
- Database Design
- Python/PySpark
- Azure Blob Storage
- Azure Data Factory
Desirable:
- Exposure to ML/Machine Learning/AI/Artificial Intelligence …
…systems, deploying LLMs, and operationalizing models in production.
Key Responsibilities:
- Design, develop, and deploy ML, Deep Learning, and LLM solutions.
- Implement scalable ML and data pipelines in Databricks (PySpark, Delta Lake, MLflow).
- Build automated MLOps pipelines with model tracking, CI/CD, and registry (a minimal tracking sketch follows this listing).
- Deploy and operationalize LLMs, including fine-tuning, prompt optimization, and monitoring.
- Architect secure …
- Mentor engineers, enforce best practices, and lead design/architecture reviews.
Required Skills & Experience:
- 5+ years in ML/AI solution development.
- Recent hands-on experience with Databricks, PySpark, Delta Lake, MLflow.
- Experience with LLMs (Hugging Face, LangChain, Azure OpenAI).
- Strong MLOps, CI/CD, and model monitoring experience.
- Proficiency in Python, PyTorch/TensorFlow, FastAPI …
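The model-tracking and registry workflow named above is commonly built on MLflow's run API; a minimal hedged sketch, where the experiment path, model name, and toy dataset are invented for illustration.

```python
# Hedged MLflow tracking/registry sketch; experiment path, model name, and the
# toy dataset are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=42)

mlflow.set_experiment("/Shared/churn-experiment")
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100).fit(X, y)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering the model lets a CI/CD job promote versions between stages.
    mlflow.sklearn.log_model(model, "model", registered_model_name="churn_model")
```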
About the Role
We are looking for a Python Data Engineer with strong hands-on experience in Behave-based unit testing, PySpark development, Delta Lake optimisation, and Azure cloud services. This role focusses on designing and deploying scalable data processing solutions in a containerised environment, emphasising maintainable, configurable, and test-driven code delivery.
Key Responsibilities
- Develop and maintain data ingestion, transformation, and validation pipelines using Python and PySpark.
- Implement unit and behaviour-driven testing with Behave, ensuring robust mocking and patching of dependencies (a step-file sketch follows this listing).
- Design and maintain Delta Lake tables for optimised query performance, ACID compliance, and incremental data loads.
- Build and manage containerised environments using Docker for consistent development, testing, and deployment.
- Develop configurable, parameter-driven codebases to support modular and reusable data solutions.
- Integrate Azure services, including:
  - Azure Functions for serverless transformation logic
  - Azure Key Vault for secure credential management
  - Azure Blob Storage for data lake operations
What We're Looking For
- Proven experience in Python, PySpark, and Delta Lake.
- SC Cleared
- Strong knowledge of Behave for test-driven development.
- Experience with Docker and containerised …
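Behave-based testing with mocking, as described above, pairs a plain-text feature file with Python step implementations. The sketch below is illustrative only: the my_pipeline module, its blob_client, and the feature wording are hypothetical stand-ins for whatever is under test.

```python
# Illustrative Behave step file with a patched external dependency.
# The my_pipeline module and its blob_client are hypothetical.
# A matching feature file would read:
#   Given a pipeline configured for the "sales" dataset
#   When the ingestion step runs
#   Then the upload is performed 1 time
from unittest.mock import patch

from behave import given, when, then

import my_pipeline  # hypothetical module under test


@given('a pipeline configured for the "{dataset}" dataset')
def step_configure(context, dataset):
    context.dataset = dataset


@when("the ingestion step runs")
def step_run(context):
    # Patch the storage client so the test never touches real Azure Blob Storage.
    with patch("my_pipeline.blob_client.upload") as fake_upload:
        my_pipeline.ingest(context.dataset)
        context.upload_calls = fake_upload.call_count


@then("the upload is performed {count:d} time")
def step_assert(context, count):
    assert context.upload_calls == count
```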
…to capture Cloud + Databricks needs.
- Define security, compliance, downtime tolerance, RPO/RTO, SLAs, and cost requirements.
- Capture data platform requirements across ingestion, transformation, governance, and analytics (Databricks, Delta Lake, Unity Catalog, Workflows).
- Map service and data dependencies, classify criticality, and align to the Core Cloud capability catalogue.
- Produce a clear, endorsed baseline of Core Cloud …
- …BA within cloud or data platform programmes (Azure + Databricks ideal).
- Experience working with the AWS tech stack.
- Strong experience gathering technical, data, and platform requirements.
- Understanding of Databricks (Delta Lake, Unity Catalog, governance, clusters, pipelines).
- Comfortable engaging technical and non-technical stakeholders; strong documentation skills.
Nice to Have: Data platform migration experience; exposure to FinOps; agile …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
…architectures on Azure, enabling advanced analytics and data-driven decision making across the business.
Key Responsibilities
- Design, develop, and maintain ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake.
- Build and optimise data lakehouse architectures on Azure Data Lake Storage (ADLS).
- Develop high-performance data solutions using Azure Synapse, Azure Data Factory, and Databricks workflows.
- Implement … tools like Terraform, GitHub Actions, or Azure DevOps.
Required Skills & Experience
- 3+ years' experience as a Data Engineer working in Azure environments.
- Strong hands-on experience with Databricks (PySpark, Delta Lake, cluster optimisation, job scheduling).
- Solid knowledge of Azure cloud services including:
  - Azure Data Lake Storage
  - Azure Data Factory
  - Azure Synapse/SQL Pools
  - Azure Key Vault …
…data into trusted, actionable insights that power critical business decisions.
Key Responsibilities
- Design and implement scalable data pipelines and ETL/ELT workflows in Databricks using PySpark, SQL, and Delta Lake.
- Architect and manage the Medallion (Bronze, Silver, Gold) data architecture for optimal data organization, transformation, and consumption (a bronze-to-silver sketch follows this listing).
- Develop and maintain data models, schemas, and data quality frameworks across …
- …emerging technologies in cloud data platforms, Lakehouse architecture, and data engineering frameworks.
Required Qualifications
- 6+ years of experience in data engineering
- 3+ years of hands-on experience with Databricks, Delta Lake, and Spark (PySpark preferred).
- Proven track record implementing Medallion Architecture (Bronze, Silver, Gold layers) in production environments.
- Strong knowledge of data modeling, ETL/ELT design, and data lakehouse concepts.
- Proficiency in Python, SQL, and Spark optimization techniques.
- Experience working with cloud data platforms such as Azure Data Lake, AWS S3, or GCP BigQuery.
- Strong understanding of data quality frameworks, testing, and CI/CD pipelines for data workflows.
- Excellent communication skills and ability to collaborate across teams.
Preferred Qualifications
- Experience with Databricks Unity Catalog …
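A bronze-to-silver step in the Medallion architecture described above usually amounts to deduplication, type enforcement, and quality filtering; a minimal sketch, with the paths, column names, and cleansing rules assumed for illustration.

```python
# Minimal bronze-to-silver sketch; paths, columns, and rules are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.format("delta").load("/mnt/lake/bronze/orders")

# Silver layer: deduplicate, enforce types, and drop rows failing basic checks.
silver = (bronze.dropDuplicates(["order_id"])
                .withColumn("order_ts", F.to_timestamp("order_ts"))
                .filter(F.col("amount").isNotNull() & (F.col("amount") > 0)))

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/orders")
```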
If you love solving complex data challenges and building scalable solutions, this is your chance to make an impact.
What You'll Work With
- Azure Data Services: Data Factory, Data Lake, SQL
- Databricks: Spark, Delta Lake
- Power BI: Advanced dashboards
- ETL & Data Modelling: T-SQL, metadata-driven pipelines
- DevOps: CI/CD
- Bonus: Python
What you'll do …
City of London, London, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
…real impact. You'll work with cutting-edge technology and stay at the forefront of the data engineering field.
You'll Work With
- Azure Data Services: Data Factory, Data Lake, SQL
- Databricks: Spark, Delta Lake
- Power BI: Advanced dashboards and analytics
- ETL & Data Modelling: T-SQL, metadata-driven pipelines
Design and implement scalable Azure-based data solutions …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
…make an impact. Working with the latest technology, ensuring you can be at the forefront of your field.
What You'll Work With
- Azure Data Services: Data Factory, Data Lake, SQL
- Databricks: Spark, Delta Lake
- Power BI: Advanced dashboards
- ETL & Data Modelling: T-SQL, metadata-driven pipelines
What you'll do
- Design and implement scalable Azure-based …
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
Develop complex transformations and processing logic using Spark (PySpark/Scala) for cleaning, enrichment, and aggregation, ensuring accuracy and consistency across the data lifecycle.
- Work extensively with Unity Catalog, Delta Lake, Spark SQL, and related services.
- Apply best practices for development, deployment, and workload optimization.
- Program in SQL, Python, R, YAML, and JavaScript.
- Integrate data from relational databases …
…your skills in analytics engineering, responding to business and project needs rather than operating as a narrow silo. You'll work hands-on with Azure Databricks, Azure Data Factory, Delta Lake, and Power BI to create scalable data models, automated pipelines, and self-service analytics capabilities. This is a fantastic opportunity to join a newly created team, work …
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
…sources (APIs, databases, files) into Azure Databricks.
- Implement data cleaning, validation, and enrichment using Spark (PySpark/Scala) and related tools to ensure quality and consistency.
- Utilize Unity Catalog, Delta Lake, Spark SQL, and best practices for Databricks development, optimization, and deployment.
- Program in SQL, Python, R, YAML, and JavaScript.
- Integrate data from multiple sources and formats (CSV, JSON, Parquet, Delta) for downstream analytics, dashboards, and reporting (a small sketch follows this listing).
- Apply Azure Purview for governance and quality checks.
- Monitor pipelines, resolve issues, and enhance data quality processes.
- Work closely with engineers, data scientists, and stakeholders.
- Participate in code reviews and clearly communicate technical concepts.
- Develop CI/CD pipelines for deployments and automate data engineering workflows using DevOps principles.
Interested …
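Integrating the listed formats into one DataFrame is mostly a matter of format-specific readers plus a name-based union; a small sketch, with placeholder paths and the assumption that the sources share a compatible schema.

```python
# Sketch of reading CSV, JSON, Parquet, and Delta into one DataFrame.
# Paths are placeholders; sources are assumed schema-compatible (CSV relies on
# inferSchema here, and may still need explicit casts in practice).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

csv_df = (spark.read.option("header", "true")
          .option("inferSchema", "true")
          .csv("/mnt/raw/orders_csv/"))
json_df = spark.read.json("/mnt/raw/orders_json/")
parquet_df = spark.read.parquet("/mnt/raw/orders_parquet/")
delta_df = spark.read.format("delta").load("/mnt/raw/orders_delta/")

# unionByName aligns columns by name rather than position, tolerating sources
# that omit optional columns.
combined = (csv_df
            .unionByName(json_df, allowMissingColumns=True)
            .unionByName(parquet_df, allowMissingColumns=True)
            .unionByName(delta_df, allowMissingColumns=True))
```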
…data models (star schema, snowflake schema, Data Vault) supporting analytics and BI
- Define data strategy and governance frameworks – lineage, cataloging, security, compliance
- Lead solution design for data warehouse, data lake, and lakehouse implementations
- Architect real-time and batch data integration patterns across hybrid environments
Technical Leadership:
- Lead technical workstreams on large transformation and migration programmes
- Define DataOps standards – CI …
- …Vault, logical/physical design
- Snowflake Cloud Data Platform – architecture design, performance tuning, cost optimization, governance
- Azure Data Factory – pipeline architecture, orchestration patterns, best practices
- Azure Databricks – Lakehouse architecture, Delta Lake, Unity Catalog, medallion layers
- SQL & Python – strong technical foundation for hands-on guidance and code reviews
- DataOps & CI/CD – GitHub/Azure DevOps, automated … deployments, version control, testing frameworks
Architecture Experience:
- Enterprise architecture and solution design (6-12+ years' experience)
- Data warehouse and data lake architecture patterns
- Migration and modernization programmes (on-prem to cloud)
- Data governance frameworks (security, lineage, quality, cataloging)
- Performance tuning and cost optimization strategies
Warrington, Cheshire, England, United Kingdom Hybrid/Remote Options
Brookson
…Science, Mathematics, Engineering, or other STEM.
- A strong team player with empathy, humility, and dedication to joint success and shared development.
Desirable Experience and Qualifications:
- Experience with Databricks or Delta Lake architecture.
- Experience building architecture and Data Warehousing within the Microsoft stack.
- Experience in development source control (e.g. Bitbucket, GitHub).
- Experience in low-code analytical tools (e.g. …
London, South East, England, United Kingdom Hybrid/Remote Options
Vermillion Analytics
…it ran successfully"
- Get a kick out of making complex data architectures simple and elegant
- Be able to explain technical decisions to non-technical humans
- Bonus points: experience with Delta Lake/Iceberg, real-time streaming, or LLM orchestration
What's on offer:
- Work on genuinely interesting problems (behavioural + financial data = never boring)
- Shape the data strategy …