and optimise Spark-based data pipelines for batch and streaming workloads Develop Fabric dataflows, pipelines, and semantic models Implement complex transformations, joins, aggregations and performance tuning Build and optimise Delta Lake/Delta tables Develop secure data solutions including role-based access, data masking and compliance controls Implement data validation, cleansing, profiling and documentation Work closely with … on experience with PySpark, Spark SQL, Spark Streaming, DataFrames Microsoft Fabric (Fabric Spark jobs, dataflows, pipelines, semantic models) Azure: ADLS, cloud data engineering, notebooks Python programming; Java exposure beneficial Delta Lake/Delta table optimisation experience Git/GitLab, CI/CD pipelines, DevOps practices Strong troubleshooting and problem-solving ability Experience with lakehouse architectures, ETL workflows … and distributed computing Familiarity with time-series, market data, transactional data or risk metrics Nice to Have Power BI dataset preparation OneLake, Azure Data Lake, Kubernetes, Docker Knowledge of financial regulations (GDPR, SOX) Details Location: London (office-based) Type: Contract Duration: 6 months Start: ASAP Rate: Market rates If you are a PySpark/Fabric/Azure Data Engineer …
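The data validation and profiling duties named in the listing above can be sketched in plain Python. The record fields (`trade_id`, `price`, `venue`) are invented for the example; a production pipeline would express the same checks in PySpark/Spark SQL rather than stdlib loops:

```python
# Minimal data-validation and profiling sketch (illustrative only --
# a real Spark job would run equivalent checks on DataFrames).
from collections import Counter

def profile(records, required=("trade_id", "price")):
    """Count nulls per field and collect records missing required keys."""
    null_counts = Counter()
    invalid = []
    for rec in records:
        for field, value in rec.items():
            if value is None:
                null_counts[field] += 1
        if any(rec.get(f) is None for f in required):
            invalid.append(rec)
    return dict(null_counts), invalid

rows = [
    {"trade_id": 1, "price": 101.5, "venue": None},     # passes (venue optional)
    {"trade_id": None, "price": 99.0, "venue": "LSE"},  # fails: trade_id missing
]
nulls, bad = profile(rows)
# nulls records one missing venue and one missing trade_id;
# bad holds the single record that fails the required-field check
```

The same shape of check (null profiling plus a required-column rule) is what expectation-style frameworks on Spark automate at scale.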
Bournemouth, Dorset, South West, United Kingdom Hybrid/Remote Options
Sanderson Recruitment
Databricks Engineer: Key Responsibilities Build and maintain Databricks pipelines (batch and incremental) using PySpark and SQL. Orchestrate end-to-end workflows with Azure Data Factory. Develop and optimise Delta Lake tables (partitioning, schema evolution, vacuuming). Implement Medallion Architecture (Bronze, Silver, Gold) for transforming raw data into business-ready datasets. Apply robust monitoring, logging, and error-handling … Engineer: About You Strong PySpark development skills for large-scale data engineering. Proven experience with Databricks pipelines and workflow management. Expertise in Azure Data Factory orchestration. Solid knowledge of Delta Lake and Lakehouse principles. Hands-on experience with SQL for data transformation. Familiarity with Azure services (ADLS/Blob, Key Vault, SQL). Knowledge of ETL/ELT …
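The Medallion (Bronze, Silver, Gold) flow this role implements can be illustrated with a stdlib-only sketch. The shop-sales records are invented for the example; in Databricks each layer would be a Delta table populated by PySpark jobs rather than Python lists:

```python
# Illustrative Medallion flow: raw -> cleaned -> business-ready.
bronze = [  # Bronze: records exactly as ingested (strings, bad rows included)
    {"store": "A", "amount": "10.0"},
    {"store": "A", "amount": "5.5"},
    {"store": None, "amount": "3.0"},  # malformed: no store identifier
]

# Silver: cleaned and typed -- drop malformed rows, cast amounts to float
silver = [
    {"store": r["store"], "amount": float(r["amount"])}
    for r in bronze
    if r["store"] is not None
]

# Gold: business-ready aggregate (total sales per store)
gold = {}
for r in silver:
    gold[r["store"]] = gold.get(r["store"], 0.0) + r["amount"]
# gold now maps each store to its total: {"A": 15.5}
```

The point of the layering is that each stage is reproducible from the one before it, so a schema fix in Silver never requires re-ingesting Bronze.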
systems, deploying LLMs, and operationalizing models in production. Key Responsibilities: Design, develop, and deploy ML, Deep Learning, and LLM solutions. Implement scalable ML and data pipelines in Databricks (PySpark, Delta Lake, MLflow). Build automated MLOps pipelines with model tracking, CI/CD, and registry. Deploy and operationalize LLMs, including fine-tuning, prompt optimization, and monitoring. Architect secure … Mentor engineers, enforce best practices, and lead design/architecture reviews. Required Skills & Experience: 5+ years in ML/AI solution development. Recent hands-on experience with Databricks, PySpark, Delta Lake, MLflow. Experience with LLMs (Hugging Face, LangChain, Azure OpenAI). Strong MLOps, CI/CD, and model monitoring experience. Proficiency in Python, PyTorch/TensorFlow, FastAPI …
Data Engineer/Data Engineering/Data Consultant/Lakehouse/Delta Lake/Data Warehousing/ETL/Azure/Azure Databricks/Python/SQL/Based in the West Midlands/Solihull/Birmingham area, Permanent role, £50,000-£70,000 + car/allowance (£5,000) + 15% bonus. One of our leading clients … + car/allowance + bonus Experience: Experience in a Data Engineer/Data Engineering role Large and complex datasets Azure, Azure Databricks Microsoft SQL Server Lakehouse, Delta Lake Data Warehousing ETL Database Design Python/PySpark Azure Blob Storage Azure Data Factory Desirable: Exposure to ML/Machine Learning/AI/Artificial Intelligence …
to capture Cloud + Databricks needs. Define security, compliance, downtime tolerance, RPO/RTO, SLAs, and cost requirements. Capture data platform requirements across ingestion, transformation, governance, and analytics (Databricks, Delta Lake, Unity Catalog, Workflows). Map service and data dependencies, classify criticality, and align to the Core Cloud capability catalogue. Produce a clear, endorsed baseline of Core Cloud … BA within cloud or data platform programmes (Azure + Databricks ideal). Experience working with the AWS tech stack Strong experience gathering technical, data, and platform requirements. Understanding of Databricks (Delta Lake, Unity Catalog, governance, clusters, pipelines). Comfortable engaging technical and non-technical stakeholders; strong documentation skills. Nice to Have: Data platform migration experience; exposure to FinOps; agile …
Nottingham, England, United Kingdom Hybrid/Remote Options
Nottingham Building Society
of emerging features and help shape the Society’s long-term data strategy. About you - Extensive Technical Expertise: Strong knowledge of Microsoft Fabric components including OneLake, Lakehouse/Warehouse, Delta Lake, Direct Lake, Data Factory, Spark (PySpark/Scala) and Power BI (DAX and semantic modelling). Advanced Programming and Data Engineering Skills: Proficient in Python, SQL …
data into trusted, actionable insights that power critical business decisions. Key Responsibilities Design and implement scalable data pipelines and ETL/ELT workflows in Databricks using PySpark, SQL, and Delta Lake. Architect and manage the Medallion (Bronze, Silver, Gold) data architecture for optimal data organization, transformation, and consumption. Develop and maintain data models, schemas, and data quality frameworks across … emerging technologies in cloud data platforms, Lakehouse architecture, and data engineering frameworks. Required Qualifications 6+ years of experience in data engineering 3+ years of hands-on experience with Databricks, Delta Lake, and Spark (PySpark preferred). Proven track record implementing Medallion Architecture (Bronze, Silver, Gold layers) in production environments. Strong knowledge of data modeling, ETL/ELT design … and data lakehouse concepts. Proficiency in Python, SQL, and Spark optimization techniques. Experience working with cloud data platforms such as Azure Data Lake, AWS S3, or GCP BigQuery. Strong understanding of data quality frameworks, testing, and CI/CD pipelines for data workflows. Excellent communication skills and ability to collaborate across teams. Preferred Qualifications Experience with Databricks Unity Catalog …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
architectures on Azure, enabling advanced analytics and data-driven decision making across the business. Key Responsibilities Design, develop, and maintain ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake. Build and optimise data lakehouse architectures on Azure Data Lake Storage (ADLS). Develop high-performance data solutions using Azure Synapse, Azure Data Factory, and Databricks workflows Implement … tools like Terraform, GitHub Actions, or Azure DevOps Required Skills & Experience 3+ years' experience as a Data Engineer working in Azure environments. Strong hands-on experience with Databricks (PySpark, Delta Lake, cluster optimisation, job scheduling). Solid knowledge of Azure cloud services including: Azure Data Lake Storage Azure Data Factory Azure Synapse/SQL Pools Azure Key …
you love solving complex data challenges and building scalable solutions, this is your chance to make an impact. What You'll Work With Azure Data Services: Data Factory, Data Lake, SQL Databricks: Spark, Delta Lake Power BI: Advanced dashboards ETL & Data Modelling: T-SQL, metadata-driven pipelines DevOps: CI/CD Bonus: Python What you'll do …
London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
methodologies that enhance efficiency, scalability, and cost optimisation Essential Skills for the Senior Data Engineer: Proficient with Databricks and Apache Spark, including performance tuning and advanced concepts such as Delta Lake and streaming Strong programming skills in Python with experience in software engineering principles, version control, unit testing and CI/CD pipelines Advanced knowledge of SQL and …
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
Develop complex transformations and processing logic using Spark (PySpark/Scala) for cleaning, enrichment, and aggregation, ensuring accuracy and consistency across the data lifecycle. Work extensively with Unity Catalog, Delta Lake, Spark SQL, and related services. Apply best practices for development, deployment, and workload optimization. Program in SQL, Python, R, YAML, and JavaScript. Integrate data from relational databases …
your skills in analytics engineering, responding to business and project needs rather than operating as a narrow silo. You'll work hands-on with Azure Databricks, Azure Data Factory, Delta Lake, and Power BI to create scalable data models, automated pipelines, and self-service analytics capabilities. This is a fantastic opportunity to join a newly created team, work …
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
sources (APIs, databases, files) into Azure Databricks. Implement data cleaning, validation, and enrichment using Spark (PySpark/Scala) and related tools to ensure quality and consistency. Utilize Unity Catalog, Delta Lake, Spark SQL, and best practices for Databricks development, optimization, and deployment. Program in SQL, Python, R, YAML, and JavaScript. Integrate data from multiple sources and formats (CSV … JSON, Parquet, Delta) for downstream analytics, dashboards, and reporting. Apply Azure Purview for governance and quality checks. Monitor pipelines, resolve issues, and enhance data quality processes. Work closely with engineers, data scientists, and stakeholders. Participate in code reviews and clearly communicate technical concepts. Develop CI/CD pipelines for deployments and automate data engineering workflows using DevOps principles. Interested …
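Integrating records that arrive in different formats, as this role requires, amounts to normalising each source into one target schema. A minimal stdlib sketch with invented `id`/`value` fields; a Databricks job would use `spark.read` for each format instead of manual parsing:

```python
# Normalise CSV and JSON inputs into a single list of typed records.
import csv
import io
import json

csv_src = "id,value\n1,10\n2,20\n"          # CSV arrives as text: all strings
json_src = '[{"id": 3, "value": 30}]'      # JSON arrives already typed

# CSV path: DictReader yields string fields, so cast explicitly
records = [
    {"id": int(r["id"]), "value": int(r["value"])}
    for r in csv.DictReader(io.StringIO(csv_src))
]
# JSON path: fields are already ints, just project onto the same shape
records += [{"id": r["id"], "value": r["value"]} for r in json.loads(json_src)]
# records now holds three uniform rows with ids 1, 2, 3
```

The key design point is that type coercion happens per source at the boundary, so everything downstream sees one schema regardless of origin.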
data models (star schema, snowflake schema, Data Vault) supporting analytics and BI Define data strategy and governance frameworks – lineage, cataloging, security, compliance Lead solution design for data warehouse, data lake, and lakehouse implementations Architect real-time and batch data integration patterns across hybrid environments Technical Leadership: Lead technical workstreams on large transformation and migration programmes Define DataOps standards – CI … Vault, logical/physical design Snowflake Cloud Data Platform - Architecture design, performance tuning, cost optimization, governance Azure Data Factory - Pipeline architecture, orchestration patterns, best practices Azure Databricks - Lakehouse architecture, Delta Lake, Unity Catalog, medallion layers SQL & Python - Strong technical foundation for hands-on guidance and code reviews DataOps & CI/CD - GitHub/Azure DevOps, automated … deployments, version control, testing frameworks Architecture Experience: Enterprise architecture and solution design (6-12+ years experience) Data warehouse and data lake architecture patterns Migration and modernization programmes (on-prem to cloud) Data governance frameworks (security, lineage, quality, cataloging) Performance tuning and cost optimization strategies …
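The star-schema modelling this architect role calls out separates fact rows (events, measures) from dimension rows (descriptive attributes). A toy illustration in plain Python with invented table contents, resolving dimension attributes onto facts before aggregating:

```python
# Star-schema sketch: one fact table keyed into one dimension table.
# In a warehouse these would be tables joined in SQL, not Python dicts.
dim_product = {  # dimension: product_id -> descriptive attributes
    1: {"name": "Widget", "category": "Tools"},
    2: {"name": "Gasket", "category": "Parts"},
}
fact_sales = [  # facts: one row per sale, referencing the dimension by key
    {"product_id": 1, "qty": 2},
    {"product_id": 1, "qty": 3},
    {"product_id": 2, "qty": 4},
]

# "Join" each fact to its dimension, then roll up by category
by_category = {}
for f in fact_sales:
    cat = dim_product[f["product_id"]]["category"]
    by_category[cat] = by_category.get(cat, 0) + f["qty"]
# by_category totals quantity sold per product category
```

Keeping measures in facts and attributes in dimensions is what lets BI tools slice the same fact table by any dimension attribute without restating the data.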
Warrington, Cheshire, England, United Kingdom Hybrid/Remote Options
Brookson
Science, Mathematics, Engineering or other STEM A strong team player with empathy, humility and dedication to joint success and shared development. Desirable Experience and Qualifications: Experience with Databricks or Delta Lake architecture. Experience building architecture and Data Warehousing within the Microsoft Stack Experience in development source control (e.g. Bitbucket, GitHub) Experience in Low Code Analytical Tools (e.g. …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
practices in data governance, security, and compliance. Key Skills & Experience Proven experience as an Azure Data Engineer. Strong hands-on expertise with Databricks - 5+ years experience (PySpark, notebooks, clusters, Delta Lake). Solid knowledge of Azure services (Data Lake, Synapse, Data Factory, Event Hub). Experience working with DevOps teams and CI/CD pipelines. Ability to …
via Tabular Editor. Excellent design intuition - clean layouts, drill paths, and KPI logic. Nice to Have Python for automation or ad-hoc prep; PySpark familiarity. Understanding of Lakehouse patterns, Delta Lake, metadata-driven pipelines. Unity Catalog/Purview experience for lineage and governance. RLS/OLS implementation experience. …
Python for data engineering tasks. Familiarity with GitLab for version control and CI/CD. Strong understanding of unit testing and data validation techniques. Preferred Qualifications: Experience with Databricks Delta Lake, Unity Catalog, and MLflow. Knowledge of CloudFormation or other infrastructure-as-code tools. AWS or Databricks certifications. Experience in large-scale data migration projects. Background in Finance …