Senior Azure Data Engineer (Databricks, Delta Live Tables, Unity Catalog) 3 - 6 months Remote (UK Candidates Only) £550 - £600 per day Outside IR35 We are currently recruiting for an experienced Data Engineer skilled in Microsoft Azure and cloud computing concepts. As the Azure Data Engineer, you will work closely with a Microsoft & Databricks partner with responsibilities of end-to … sales, technical architecture or consulting role Experience working on Big Data architectures independently Comfortable writing code in Python Experience working across Azure including Azure Data Factory, Azure Synapse, Azure Data Lake Storage, Delta Lake etc. Experience with Purview, Unity Catalog etc. Experience with streaming data in Kafka/Event Hubs/Stream Analytics etc. Experience working …
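Where an ad like this asks for streaming experience with Kafka/Event Hubs into Databricks, the work usually reduces to a Structured Streaming job. A minimal sketch, assuming a Databricks runtime with the built-in Kafka connector; the namespace, hub name, target table, and connection string are placeholders:

```python
# Read from Azure Event Hubs via its Kafka-compatible endpoint and land the
# raw payload in a Delta table. All names below are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

raw_events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "my-namespace.servicebus.windows.net:9093")
    .option("subscribe", "telemetry")  # the Event Hub name acts as the topic
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "PLAIN")
    .option(
        "kafka.sasl.jaas.config",
        'kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule '
        'required username="$ConnectionString" password="<connection-string>";',
    )
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers the payload as binary, so cast before persisting.
(
    raw_events.selectExpr("CAST(value AS STRING) AS body", "timestamp")
    .writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/telemetry")
    .toTable("bronze.telemetry_events")
)
```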
Bournemouth, Dorset, South West, United Kingdom Hybrid/Remote Options
Sanderson Recruitment
Databricks Engineer: Key Responsibilities Build and maintain Databricks pipelines (batch and incremental) using PySpark and SQL. Orchestrate end-to-end workflows with Azure Data Factory. Develop and optimise Delta Lake tables (partitioning, schema evolution, vacuuming). Implement Medallion Architecture (Bronze, Silver, Gold) for transforming raw data into business-ready datasets. Apply robust monitoring, logging, and error-handling … Engineer: About You Strong PySpark development skills for large-scale data engineering. Proven experience with Databricks pipelines and workflow management. Expertise in Azure Data Factory orchestration. Solid knowledge of Delta Lake and Lakehouse principles. Hands-on experience with SQL for data transformation. Familiarity with Azure services (ADLS/Blob, Key Vault, SQL). Knowledge of ETL/ELT …
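The table-maintenance duties named here (partitioning, schema evolution, vacuuming) map onto a handful of Delta Lake operations. A hedged sketch, assuming the delta-spark package is available; table, column, and layer names are invented:

```python
# Partitioned Bronze-to-Silver append with schema evolution, followed by a
# VACUUM to drop files no longer referenced by the table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

(
    spark.read.table("bronze.orders")
    .write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")   # tolerate new columns arriving upstream
    .partitionBy("order_date")       # partition for pruning on date filters
    .saveAsTable("silver.orders")
)

# Default retention is 7 days (168 hours); shorter windows need an override.
DeltaTable.forName(spark, "silver.orders").vacuum(retentionHours=168)
```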
data governance across modern cloud environments. Key Responsibilities Design, build, and maintain scalable data pipelines using Databricks Notebooks, Jobs, and Workflows for both batch and streaming data. Optimise Spark and Delta Lake performance through efficient cluster configuration, adaptive query execution, and caching strategies. Conduct performance testing and cluster tuning to ensure cost-efficient, high-performing workloads. Implement data quality … control policies aligned with Databricks Unity Catalog and governance best practices. Develop PySpark applications for ETL, data transformation, and analytics, following modular and reusable design principles. Create and manage Delta Lake tables with ACID compliance, schema evolution, and time travel for versioned data management. Integrate Databricks solutions with Azure services such as Azure Data Lake Storage, Key Vault, and Azure Functions. What We're Looking For Proven experience with Databricks, PySpark, and Delta Lake. Strong understanding of workflow orchestration, performance optimisation, and data governance. Hands-on experience with Azure cloud services. Ability to work in a fast-paced environment and deliver high-quality solutions. SC Cleared candidates. If you're interested in this role, click 'apply' …
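The performance and versioning responsibilities above (adaptive query execution, caching, time travel) can be shown briefly. A sketch, assuming a recent Spark/Delta runtime; the table names and timestamp are placeholders:

```python
# Session-level tuning plus a versioned (time-travel) read of a Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Adaptive query execution re-plans joins and shuffles at runtime.
spark.conf.set("spark.sql.adaptive.enabled", "true")

# Cache a hot dimension table to avoid re-scanning it within a job run.
dim_customers = spark.read.table("gold.dim_customers").cache()

# Time travel: read the table as it existed at a prior point in time.
snapshot = (
    spark.read.format("delta")
    .option("timestampAsOf", "2024-01-01")
    .table("silver.transactions")
)
```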
Data Engineer/Data Engineering/Data Consultant/Lakehouse/Delta Lake/Data Warehousing/ETL/Azure/Azure Databricks/Python/SQL/Based in the West Midlands/Solihull/Birmingham area, Permanent role, £50,000 - £70,000 + car/allowance (£5,000) + 15% bonus. One of our leading clients … + car/allowance + bonus Experience: Experience in a Data Engineer/Data Engineering role Large and complex datasets Azure, Azure Databricks, Microsoft SQL Server, Lakehouse, Delta Lake, Data Warehousing, ETL, Database Design, Python/PySpark, Azure Blob Storage, Azure Data Factory Desirable: Exposure to ML/Machine Learning/AI/Artificial Intelligence …
systems, deploying LLMs, and operationalizing models in production. Key Responsibilities: Design, develop, and deploy ML, Deep Learning, and LLM solutions. Implement scalable ML and data pipelines in Databricks (PySpark, Delta Lake, MLflow). Build automated MLOps pipelines with model tracking, CI/CD, and registry. Deploy and operationalize LLMs, including fine-tuning, prompt optimization, and monitoring. Architect secure … Mentor engineers, enforce best practices, and lead design/architecture reviews. Required Skills & Experience: 5+ years in ML/AI solution development. Recent hands-on experience with Databricks, PySpark, Delta Lake, MLflow. Experience with LLMs (Hugging Face, LangChain, Azure OpenAI). Strong MLOps, CI/CD, and model monitoring experience. Proficiency in Python, PyTorch/TensorFlow, FastAPI …
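The MLOps duties here (model tracking, CI/CD, registry) centre on MLflow. A minimal sketch, using scikit-learn purely for illustration; the experiment and model names are invented:

```python
# Track a training run and register the resulting model in one step.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=42)

mlflow.set_experiment("/Shared/churn-model")
with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # registered_model_name creates or versions a Model Registry entry.
    mlflow.sklearn.log_model(model, "model", registered_model_name="churn_model")
```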
About the Role We are looking for a Python Data Engineer with strong hands-on experience in Behave-based unit testing, PySpark development, Delta Lake optimisation, and Azure cloud services. This role focusses on designing and deploying scalable data processing solutions in a containerised environment, emphasising maintainable, configurable, and test-driven code delivery. Key Responsibilities Develop and maintain data ingestion, transformation, and validation pipelines using Python and PySpark. Implement unit and behaviour-driven testing with Behave, ensuring robust mocking and patching of dependencies. Design and maintain Delta Lake tables for optimised query performance, ACID compliance, and incremental data loads. Build and manage containerised environments using Docker for consistent development, testing, and deployment. Develop configurable, parameter-driven codebases to support modular and reusable data solutions. Integrate Azure services, including: Azure Functions for serverless transformation logic Azure Key Vault for secure credential management Azure Blob Storage for data lake operations What We're Looking For Proven experience in Python, PySpark, and Delta Lake. SC Cleared. Strong knowledge of Behave for test-driven development. Experience with Docker and containerised …
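Behave-based testing with mocked dependencies, as requested above, pairs a Gherkin feature with Python step definitions. A sketch with an invented scenario and a hypothetical pipeline.io module standing in for the real storage client:

```python
# steps/transform_steps.py - assumes a feature file along the lines of:
#   Scenario: null rows are dropped
#     Given a frame with 2 valid rows and 1 null row
#     When the cleaning step runs
#     Then 2 rows remain
from unittest.mock import patch

from behave import given, when, then

@given("a frame with {valid:d} valid rows and {nulls:d} null row")
def step_given_frame(context, valid, nulls):
    context.rows = [{"id": i} for i in range(valid)] + [{"id": None}] * nulls

@when("the cleaning step runs")
def step_run_cleaning(context):
    # Patch the (hypothetical) storage client so the test never touches Azure.
    with patch("pipeline.io.blob_client") as fake_blob:
        fake_blob.read.return_value = context.rows
        context.result = [r for r in fake_blob.read() if r["id"] is not None]

@then("{expected:d} rows remain")
def step_check_rows(context, expected):
    assert len(context.result) == expected
```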
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
architectures on Azure, enabling advanced analytics and data-driven decision making across the business. Key Responsibilities Design, develop, and maintain ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake. Build and optimise data lakehouse architectures on Azure Data Lake Storage (ADLS). Develop high-performance data solutions using Azure Synapse, Azure Data Factory, and Databricks workflows. Implement … tools like Terraform, GitHub Actions, or Azure DevOps Required Skills & Experience 3+ years' experience as a Data Engineer working in Azure environments. Strong hands-on experience with Databricks (PySpark, Delta Lake, cluster optimisation, job scheduling). Solid knowledge of Azure cloud services including: Azure Data Lake Storage, Azure Data Factory, Azure Synapse/SQL Pools, Azure Key …
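One common shape for the ADLS lakehouse ingestion described above is Databricks Auto Loader writing incrementally to a Bronze Delta table. A sketch, assuming a Databricks runtime; the storage account, container, and table names are placeholders:

```python
# Incremental file ingestion from ADLS into Delta using Auto Loader.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

base = "abfss://lake@myaccount.dfs.core.windows.net"

(
    spark.readStream
    .format("cloudFiles")                          # Databricks Auto Loader
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", f"{base}/_schemas/sales")
    .load(f"{base}/landing/sales/")
    .writeStream
    .option("checkpointLocation", f"{base}/_checkpoints/sales")
    .trigger(availableNow=True)                    # drain new files, then stop
    .toTable("bronze.sales")
)
```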
you love solving complex data challenges and building scalable solutions, this is your chance to make an impact. What You'll Work With: Azure Data Services: Data Factory, Data Lake, SQL; Databricks: Spark, Delta Lake; Power BI: Advanced dashboards; ETL & Data Modelling: T-SQL, metadata-driven pipelines; DevOps: CI/CD; Bonus: Python. What you'll do …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
real impact. You'll work with cutting-edge technology and stay at the forefront of the data engineering field. You'll Work With: Azure Data Services: Data Factory, Data Lake, SQL; Databricks: Spark, Delta Lake; Power BI: Advanced dashboards and analytics; ETL & Data Modelling: T-SQL, metadata-driven pipelines. Design and implement scalable Azure-based data solutions …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
make an impact. You'll work with the latest technology, keeping you at the forefront of your field. What You'll Work With: Azure Data Services: Data Factory, Data Lake, SQL; Databricks: Spark, Delta Lake; Power BI: Advanced dashboards; ETL & Data Modelling: T-SQL, metadata-driven pipelines. What you'll do: Design and implement scalable Azure-based …
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
Develop complex transformations and processing logic using Spark (PySpark/Scala) for cleaning, enrichment, and aggregation, ensuring accuracy and consistency across the data lifecycle. Work extensively with Unity Catalog, Delta Lake, Spark SQL, and related services. Apply best practices for development, deployment, and workload optimization. Program in SQL, Python, R, YAML, and JavaScript. Integrate data from relational databases …
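Cleaning, enrichment, and aggregation of the kind described here typically chain together into a single PySpark transformation. A sketch with invented tables and columns:

```python
# Deduplicate and filter raw orders, enrich with FX reference data, then
# aggregate to a business-ready daily grain.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.read.table("bronze.orders")
fx = spark.read.table("reference.fx_rates")

cleaned = (
    orders.dropDuplicates(["order_id"])
    .filter(F.col("amount").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)

daily_revenue = (
    cleaned.join(fx, on="currency", how="left")
    .withColumn("amount_gbp", F.col("amount") * F.col("rate_to_gbp"))
    .groupBy("order_date")
    .agg(F.sum("amount_gbp").alias("revenue_gbp"), F.count("*").alias("orders"))
)
```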
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
sources (APIs, databases, files) into Azure Databricks. Implement data cleaning, validation, and enrichment using Spark (PySpark/Scala) and related tools to ensure quality and consistency. Utilize Unity Catalog, Delta Lake, Spark SQL, and best practices for Databricks development, optimization, and deployment. Program in SQL, Python, R, YAML, and JavaScript. Integrate data from multiple sources and formats (CSV, JSON, Parquet, Delta) for downstream analytics, dashboards, and reporting. Apply Azure Purview for governance and quality checks. Monitor pipelines, resolve issues, and enhance data quality processes. Work closely with engineers, data scientists, and stakeholders. Participate in code reviews and clearly communicate technical concepts. Develop CI/CD pipelines for deployments and automate data engineering workflows using DevOps principles. Interested …
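Integrating the mixed formats listed above (CSV, JSON, Parquet, Delta) usually means normalising each source to a shared schema before a union. A sketch; the paths, columns, and target table are illustrative, and a real pipeline would map schemas explicitly:

```python
# Read four differently formatted sources and append them to one Delta table.
from functools import reduce

from pyspark.sql import DataFrame, SparkSession

spark = SparkSession.builder.getOrCreate()

sources = [
    spark.read.option("header", "true").csv("/mnt/landing/orders_csv/"),
    spark.read.json("/mnt/landing/orders_json/"),
    spark.read.parquet("/mnt/landing/orders_parquet/"),
    spark.read.format("delta").load("/mnt/landing/orders_delta/"),
]

# Project each source onto the shared columns, then union by name.
common = ["order_id", "amount", "currency"]
combined = reduce(DataFrame.unionByName, [df.select(*common) for df in sources])

combined.write.format("delta").mode("append").saveAsTable("bronze.orders_all")
```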
data models (star schema, snowflake schema, Data Vault) supporting analytics and BI Define data strategy and governance frameworks – lineage, cataloging, security, compliance Lead solution design for data warehouse, data lake, and lakehouse implementations Architect real-time and batch data integration patterns across hybrid environments Technical Leadership: Lead technical workstreams on large transformation and migration programmes Define DataOps standards – CI … Vault, logical/physical design Snowflake Cloud Data Platform - Architecture design, performance tuning, cost optimization, governance Azure Data Factory - Pipeline architecture, orchestration patterns, best practices Azure Databricks - Lakehouse architecture, Delta Lake, Unity Catalog, medallion layers SQL & Python - Strong technical foundation for hands-on guidance and code reviews DataOps & CI/CD - GitHub/Azure DevOps, automated deployments, version control, testing frameworks Architecture Experience: Enterprise architecture and solution design (6-12+ years experience) Data warehouse and data lake architecture patterns Migration and modernization programmes (on-prem to cloud) Data governance frameworks (security, lineage, quality, cataloging) Performance tuning and cost optimization strategies …
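A star schema of the kind this role designs boils down to conformed dimensions joined to a fact table at a declared grain. A hedged Spark SQL sketch; every table, key, and column here is invented:

```python
# Build a Gold-layer fact table keyed by surrogate keys from two dimensions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS gold.fact_sales AS
    SELECT s.sale_id,
           d.date_key,        -- surrogate key into gold.dim_date
           c.customer_key,    -- surrogate key into gold.dim_customer
           s.amount
    FROM silver.sales s
    JOIN gold.dim_date d     ON d.calendar_date = s.sale_date
    JOIN gold.dim_customer c ON c.customer_id   = s.customer_id
""")
```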
Warrington, Cheshire, England, United Kingdom Hybrid/Remote Options
Brookson
Science, Mathematics, Engineering or other STEM A strong team player with empathy, humility and dedication to joint success and shared development. Desirable Experience and Qualifications: Experience with Databricks or Delta Lake architecture. Experience building architecture and Data Warehousing within the Microsoft Stack Experience in development source control (e.g. Bitbucket, GitHub) Experience in Low Code Analytical Tools (e.g. …
London, South East, England, United Kingdom Hybrid/Remote Options
Vermillion Analytics
it ran successfully" Get a kick out of making complex data architectures simple and elegant Be able to explain technical decisions to non-technical humans Bonus points: Experience with Delta Lake/Iceberg, real-time streaming, or LLM orchestration What's on offer: Work on genuinely interesting problems (behavioural + financial data = never boring) Shape the data strategy …
London, South East, England, United Kingdom Hybrid/Remote Options
Prospect Us
pipelines, governance, and quality. Work closely with stakeholders to translate operational and strategic needs into clear data solutions. Promote modern engineering standards including Git, CI/CD, IaC, and Delta Lake. Ensure robust handling of complex and sensitive datasets from varied sources. About You: You will have extensive hands-on experience in data engineering and data architecture as well …
London, South East, England, United Kingdom Hybrid/Remote Options
Hays Specialist Recruitment Limited
and Azure experience to deliver and optimise data engineering solutions. Key requirements: Proven experience as a Data Engineer with Active Security Clearance (SC) Strong experience with Databricks, PySpark and Delta Lake Expertise in Jobs & Workflows, cluster tuning, and performance optimisation Solid understanding of data governance (Unity Catalog, Lineage, Access Policies) Hands-on with Azure services: Data Lake …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
practices in data governance, security, and compliance. Key Skills & Experience: * Proven experience as an Azure Data Engineer. * Strong hands-on expertise with Databricks - 5+ years' experience (PySpark, notebooks, clusters, Delta Lake). * Solid knowledge of Azure services (Data Lake, Synapse, Data Factory, Event Hub). * Experience working with DevOps teams and CI/CD pipelines. * Ability to …
Bristol, Avon, South West, United Kingdom Hybrid/Remote Options
IO Associates
data pipelines and analytics solutions using Databricks in a secure environment. Collaborate with data specialists to deliver efficient, high-quality solutions. Critical Skills Extensive experience with Databricks (including Spark, Delta Lake, and MLflow). Proficiency in ETL/ELT development and orchestration tools (dbt, Airflow, or similar). Hands-on experience with cloud platforms (AWS, Azure, or GCP …
via Tabular Editor. Excellent design intuition: clean layouts, drill paths, and KPI logic. Nice to Have Python for automation or ad-hoc prep; PySpark familiarity. Understanding of Lakehouse patterns, Delta Lake, metadata-driven pipelines. Unity Catalog/Purview experience for lineage and governance. RLS/OLS implementation experience. …
progression opportunities across the Group, which includes several high-profile household names. What you'll bring: Strong Python & TDD skills Expertise in modern data architecture (dimensional modelling, data mesh, data lake) & best practices (agile, CI/CD, IaC, observability). Experience with Cloud and big data technologies (e.g. Spark/Databricks/Delta Lake/BigQuery). Familiarity with eventing technologies (e.g. Event Hubs/Kafka) and file formats such as Parquet/Delta/Iceberg. Want to learn more? Get in touch for an informal chat.