spoke data architectures, optimising for performance, scalability, and security. Collaborate with business stakeholders, data engineers, and analytics teams to ensure solutions are fit for purpose. Implement and optimise Databricks Delta Lake, Medallion Architecture, and Lakehouse patterns for structured and semi-structured data. Ensure best practices in Azure networking, security, and federated data access. Key Skills & Experience 5+ years of experience in data architecture, solution architecture, or cloud data engineering roles. Strong expertise in Azure data services, including: Azure Databricks (Delta Lake, Unity Catalog, MLflow) Azure Event Hub and Azure Data Explorer for real-time and streaming data pipelines Azure Storage (ADLS Gen2, Blob Storage) for scalable data lakes Azure Purview or equivalent for data governance …
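The Medallion pattern named in this listing is easiest to picture as staged Delta tables. The sketch below is a minimal, hypothetical example of promoting raw "bronze" events to a cleaned "silver" layer with PySpark; the storage paths, table layout and column names are assumptions for illustration, not part of the posting.

```python
# Illustrative sketch only: bronze -> silver promotion in a Medallion layout.
# Assumes a Databricks-style environment where the "delta" format is available.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # pre-provided on Databricks

# Read the raw bronze Delta table (hypothetical ADLS Gen2 path)
bronze = spark.read.format("delta").load(
    "abfss://lake@account.dfs.core.windows.net/bronze/events")

silver = (
    bronze
    .dropDuplicates(["event_id"])                     # basic de-duplication
    .filter(F.col("event_ts").isNotNull())            # drop malformed rows
    .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
)

# Write the cleaned silver layer, partitioned for downstream query pruning
(silver.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("abfss://lake@account.dfs.core.windows.net/silver/events"))
```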
Data Science, Analytics, and DevOps teams to align operational strategies with technical and business requirements. Optimize operational performance and cost management for services including Azure Data Factory, Azure Databricks, Delta Lake, and Azure Data Lake Storage. Serve as the domain expert in DataOps by providing strategic guidance, mentoring colleagues, and driving continuous process improvements. What you will … and driving automation of data workflows within the Microsoft Azure ecosystem. Hands-on expertise with Azure Data Platform components such as Azure Data Factory, Azure Databricks, Azure Data Lake Storage, Delta Lake, Azure SQL, Purview and APIM. Proficiency in developing CI/CD data pipelines and strong programming skills in Python, SQL, Bash, and PySpark for …
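Much of the Delta Lake performance and cost tuning this role describes comes down to routine table maintenance. Below is a hedged sketch of such a job on Databricks; the table name, clustering column and retention window are illustrative assumptions.

```python
# Hypothetical maintenance job for a Delta table on Databricks:
# compact small files, co-locate data on a common filter column,
# and remove stale files no longer referenced by the transaction log.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

table = "analytics.sales.orders"  # assumed table name

spark.sql(f"OPTIMIZE {table} ZORDER BY (customer_id)")  # file compaction + clustering
spark.sql(f"VACUUM {table} RETAIN 168 HOURS")           # drop files older than 7 days
```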
Leeds, West Yorkshire, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
for someone with strong Databricks expertise to join their team. About the role: Designing and developing robust data pipelines using Azure Data Factory, Databricks, and Synapse Analytics. Working with Delta Lake and Azure Data Lake Storage to manage and optimise large datasets. Collaborating with data analysts, engineers, and business stakeholders to deliver clean, reliable data. Supporting the … of legacy systems to a modern Azure-based architecture Ensuring best practices in data governance, security, and performance tuning Requirements: Proven experience with Azure Data Services (ADF, Synapse, Data Lake) Strong hands-on experience with Databricks (including PySpark or SQL) Solid SQL skills and understanding of data modelling and ETL/ELT processes Familiarity with Delta Lake …
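Where Azure Data Factory orchestrates Databricks work of the kind described above, pipeline parameters typically arrive as notebook widgets. The following sketch assumes a Databricks notebook (where `spark` and `dbutils` are preconfigured); the widget name, storage paths and table name are hypothetical.

```python
# Sketch of a parameter-driven Databricks notebook that an ADF pipeline could
# invoke (ADF "base parameters" surface as notebook widgets). Names are assumed.
dbutils.widgets.text("load_date", "2024-01-01")   # populated by the ADF trigger
load_date = dbutils.widgets.get("load_date")

df = (spark.read
      .format("parquet")
      .load(f"abfss://raw@account.dfs.core.windows.net/sales/{load_date}/"))

(df.write
   .format("delta")
   .mode("append")
   .saveAsTable("lake.sales_daily"))  # lands in ADLS Gen2 as a Delta table
```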
key role in the design and delivery of advanced Databricks solutions within the Azure ecosystem. Responsibilities: Design, build, and optimise end-to-end data pipelines using Azure Databricks, including Delta Live Tables. Collaborate with stakeholders to define technical requirements and propose Databricks-based solutions. Drive best practices for data engineering. Help clients realise the potential of data science, machine … Support with planning, requirements refinements, and work estimation. Skills & Experiences: Proven experience designing and implementing data solutions in Azure using Databricks as a core platform. Hands-on expertise in Delta Lake, Delta Live Tables and Databricks Workflows. Strong coding skills in Python and SQL, with experience in developing modular, reusable code in Databricks. Deep understanding of lakehouse …
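Delta Live Tables pipelines, called out in this posting, are declared as decorated Python functions. A minimal sketch follows; it only runs inside a Databricks DLT pipeline, and the dataset names, source path and data-quality expectation are assumptions.

```python
# Minimal Delta Live Tables sketch (executes inside a Databricks DLT pipeline,
# not as a stand-alone script). Names and paths are illustrative.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested from cloud storage")
def orders_bronze():
    return (spark.readStream
                 .format("cloudFiles")                 # Databricks Auto Loader
                 .option("cloudFiles.format", "json")
                 .load("abfss://raw@account.dfs.core.windows.net/orders/"))

@dlt.table(comment="Validated orders")
@dlt.expect_or_drop("valid_amount", "amount > 0")      # data-quality expectation
def orders_silver():
    return (dlt.read_stream("orders_bronze")
               .withColumn("ingested_at", F.current_timestamp()))
```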
a high-performing and secure environment. The role reports to a project delivery lead and works closely with internal technical teams. Key Responsibilities: Design and implement Databricks Lakehouse architecture (Delta Lake, Unity Catalog, etc.) Develop ETL/ELT pipelines using Spark, Python, SQL, and Databricks workflows Integrate with Azure services and BI tools (e.g., Power BI) Optimise performance … and support CI/CD and MLOps pipelines Enable knowledge transfer through code reviews, training, and reusable templates Key Skills: In-depth experience with Databricks (Delta Lake, Unity Catalog, Lakehouse architecture). Strong knowledge of Azure services (e.g. Data Lake, Data Factory, Synapse). Solid hands-on skills in Spark, Python, PySpark, and SQL. Understanding of …
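Unity Catalog, which appears throughout this listing, organises governed data under a three-level catalog.schema.table namespace and controls access with SQL grants. The sketch below is illustrative only; the catalog, schema, table and group names are assumptions, and it assumes a Databricks notebook where `spark` is already available.

```python
# Unity Catalog namespace and a simple grant, issued as Spark SQL.
spark.sql("CREATE CATALOG IF NOT EXISTS finance")
spark.sql("CREATE SCHEMA IF NOT EXISTS finance.reporting")
spark.sql("""
    CREATE TABLE IF NOT EXISTS finance.reporting.daily_positions (
        as_of_date   DATE,
        book         STRING,
        market_value DOUBLE
    ) USING DELTA
""")

# Governed read access for a BI/analyst group (e.g. the Power BI service principal)
spark.sql("GRANT SELECT ON TABLE finance.reporting.daily_positions TO `analysts`")
```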
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
MYO Talent
Data Engineer/Data Engineering/Lakehouse/Delta Lake/Data Warehousing/ETL/Azure/Azure Databricks/Python/SQL/ML/Machine Learning/AI/Artificial Intelligence/Based in the West Midlands/Solihull/Birmingham area, Permanent role, £50,000 - 70,000 + car/allowance + bonus. One … Experience in a Data Engineer/Data Engineering role Large and complex datasets Azure, Azure Databricks ML/Machine Learning/AI/Artificial Intelligence Microsoft SQL Server Lakehouse, Delta Lake Data Warehousing ETL Database Design Python/PySpark Azure Blob Storage Azure Data Factory …
Bristol, Avon, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
on coding experience with Python or PySpark Proven expertise in building data pipelines using Azure Data Factory or Fabric Pipelines Solid experience with Azure technologies like Lakehouse Architecture, Data Lake, Delta Lake, and Azure Synapse Strong command of SQL Excellent communication and collaboration skills What's in It for You: Up to £60,000 salary depending on …
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
MYO Talent
Data Engineer/Data Engineering/Lakehouse/Delta Lake/Data Warehousing/ETL/Azure/Azure Databricks/Python/SQL/Based in the West Midlands/Solihull/Birmingham area (1 day per week), Permanent role, £50,000 - 70,000 + car/allowance + bonus. One of our leading clients is looking … + car/allowance + bonus Experience: Experience in a Data Engineer/Data Engineering role Large and complex datasets Azure, Azure Databricks Microsoft SQL Server Lakehouse, Delta Lake Data Warehousing ETL CDC Stream Processing Database Design ML Python/PySpark Azure Blob Storage Parquet Azure Data Factory Desirable: Any exposure working in a software house, consultancy, retail …
data estate, built primarily on Microsoft Azure and Databricks Key Responsibilities: Design and implement scalable, secure cloud-based data architecture (Azure & Databricks) Develop optimised data models and pipelines using Delta Lake and Azure services Define data standards, policies, and governance practices aligned with compliance Enable real-time analytics and machine learning use cases across business functions Ensure data … engineering and architecture Collaborate with internal stakeholders and third-party vendors Key Skills & Experience: Proven background designing and delivering enterprise-scale data platforms Strong knowledge of Microsoft Azure, Databricks, Delta Lake, and data warehousing Advanced data modelling and ETL/ELT optimisation experience Familiarity with regulatory frameworks such as IFRS 17, BCBS 239, and UK Data Protection Excellent …
Azure-based data solutions, with a minimum of 5 years' hands-on experience in Azure implementations. Strong technical expertise across key Azure services, including Azure Data Factory, Databricks, Data Lake, Delta Lake, Synapse Analytics, Power BI, Key Vault, Automation Account, PowerShell, SQL Database, and broader Big Data platforms. Comprehensive understanding of the Azure ecosystem and its architectural …
and Lakehouse architectures to ensure our data solutions are secure, efficient, and optimized. Key Responsibilities: Design and implement data solutions using Azure services, including Azure Databricks, ADF, and Data Lake Storage. Develop and maintain ETL/ELT pipelines to process structured and unstructured data from multiple sources. Automate loads using Databricks workflows and Jobs Develop, test and build CI … modeling, warehousing, and real-time streaming. Knowledge of developing and processing full and incremental loads. Experience of automated loads using Databricks workflows and Jobs Expertise in Azure Databricks, including Delta Lake, Spark optimizations, and MLflow. Strong experience with Azure Data Factory (ADF) for data integration and orchestration. Hands-on experience with Azure DevOps, including pipelines, repos, and infrastructure …
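Incremental loads of the kind this posting describes are usually implemented as a Delta Lake MERGE (upsert). The sketch below assumes an environment where Delta is preconfigured (as on Databricks); the staging path, target table and key column are hypothetical.

```python
# Hedged sketch of an incremental upsert into a Delta table using MERGE.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # Delta support assumed configured

# New or changed records staged by the upstream extract (hypothetical path)
updates = spark.read.format("parquet").load(
    "abfss://staging@account.dfs.core.windows.net/customers/")

target = DeltaTable.forName(spark, "lake.customers")

(target.alias("t")
   .merge(updates.alias("s"), "t.customer_id = s.customer_id")
   .whenMatchedUpdateAll()      # update existing keys
   .whenNotMatchedInsertAll()   # insert new keys
   .execute())
```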
AWS, GCP) including AWS primitives such as IAM, S3, RDS, EMR, ECS and more Advanced experience working with and understanding the tradeoffs of at least one of the following Data Lake table/file formats: Delta Lake, Parquet, Iceberg, Hudi Previous hands-on expertise with Spark Experience working with containerisation technologies - Docker, Kubernetes Streaming Knowledge: Experience with …
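The tradeoff between a plain file format and a transactional table format can be shown in a few lines of PySpark: the same DataFrame written as raw Parquet and as a Delta table, where only the latter carries a transaction log enabling ACID commits and time travel. The bucket paths are hypothetical and Delta support is assumed to be configured on the cluster.

```python
# Illustrative comparison of a file format (Parquet) and a table format (Delta).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.range(1_000).withColumnRenamed("id", "order_id")

# Plain Parquet: just files, no transaction log
df.write.mode("overwrite").parquet("s3://example-bucket/raw/orders_parquet/")

# Delta: Parquet data files plus a _delta_log for ACID and versioning
df.write.format("delta").mode("overwrite").save("s3://example-bucket/lake/orders_delta/")

# Time travel is only possible on the Delta copy
previous = (spark.read.format("delta")
                 .option("versionAsOf", 0)
                 .load("s3://example-bucket/lake/orders_delta/"))
```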
. Deep knowledge of ETL/ELT frameworks and orchestration tools (e.g., Airflow, Azure Data Factory, Dagster). Proficient in cloud platforms (preferably Azure) and services such as Data Lake, Synapse, Event Hubs, and Functions. Authoring reports and dashboards with either open source or commercial products (e.g. Power BI, Plot.ly, matplotlib) Programming OOP DevOps Web technologies HTTP/S REST … APIs Experience with time-series databases (e.g., InfluxDB, kdb+, TimescaleDB) and real-time data processing. Familiarity with distributed computing and data warehousing technologies (e.g., Spark, Snowflake, Delta Lake). Strong understanding of data governance, master data management, and data quality frameworks. Solid grasp of web technologies and APIs (REST, JSON, XML, authentication protocols). Experience with DevOps practices …
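Of the orchestrators listed above, Apache Airflow expresses a pipeline as a DAG of tasks in Python. The sketch below uses the Airflow 2.4+ style; the DAG id, task names and callables are illustrative assumptions rather than anything from the posting.

```python
# Minimal Airflow DAG sketch: extract then load, scheduled daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull readings from the source API")   # placeholder extract step

def load():
    print("write curated data to the warehouse")  # placeholder load step

with DAG(
    dag_id="sensor_readings_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # load runs only after extract succeeds
```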
London, South East, England, United Kingdom Hybrid / WFH Options
Robert Half
years of experience in a data engineering or similar technical role Hands-on experience with key Microsoft Azure services: Azure Data Factory Azure Synapse Analytics Azure Data Lake Storage Azure SQL Database Solid understanding of data modeling, ETL/ELT, and warehousing concepts Proficiency in SQL and one or more programming languages (e.g., Python, Scala) Exposure to Microsoft Fabric … Familiarity with software testing methodologies and development team collaboration Experience working with Power BI and DAX Strong documentation, communication, and stakeholder engagement skills Preferred Qualifications: Experience with Lakehouse architecture, Delta Lake, or Databricks Exposure to Agile/Scrum working practices Microsoft certifications (e.g., Azure Data Engineer Associate) Background in consulting or professional services Understanding of data governance and …
Engineer on cloud-based projects Strong hands-on skills with Databricks, Apache Spark, and Python or Scala Proficient in SQL and working with large-scale data environments Experience with Delta Lake, Azure Data Lake, or similar technologies Familiarity with version control, CI/CD, and infrastructure-as-code tools Ability to deliver independently in a contractor capacity …
. Deep knowledge of ETL/ELT frameworks and orchestration tools (e.g., Airflow, Azure Data Factory, Dagster). Proficient in cloud platforms (preferably Azure) and services such as Data Lake, Synapse, Event Hubs, and Functions. Authoring reports and dashboards with either open source or commercial products (e.g. Power BI, Plot.ly, matplotlib) Programming OOP DevOps Application development lifecycle Web technologies HTTP … CSV, JSON, XML, Parquet) Experience with time-series databases (e.g., InfluxDB, kdb+, TimescaleDB) and real-time data processing. Familiarity with distributed computing and data warehousing technologies (e.g., Spark, Snowflake, Delta Lake). Strong understanding of data governance, master data management, and data quality frameworks. Excellent communication and stakeholder management skills. Ability to mentor junior engineers and foster a …
and optimise data pipelines on our Azure platform to power analytics, dashboards, and AI. You'll also help drive our move to a modern Lakehouse architecture with Databricks and the Delta/Iceberg table formats, and help shape our future integration strategy. Who we are looking for: You have strong SQL … and Python skills, solid data engineering fundamentals, and experience delivering pipelines in a cloud environment (Azure, AWS, or GCP). Experience with Azure Data Factory, Synapse, or Data Lake Gen2 is beneficial, as is exposure to Databricks, Delta Lake, Iceberg, CI/CD, Power BI, or middleware/integration tools. Above all, you can deliver reliable, well …
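Lakehouse migrations like the one described above typically introduce continuous ingestion into Delta tables via Structured Streaming. The sketch below swaps the real source (e.g. Event Hubs) for Spark's built-in rate source to stay self-contained; the checkpoint path and table name are hypothetical, and Delta support is assumed to be configured (as on Databricks).

```python
# Minimal Structured Streaming sketch: continuous append into a Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Stand-in source producing 10 rows/second; a real pipeline would read Event Hubs/Kafka
stream = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

query = (stream.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation",
            "abfss://lake@account.dfs.core.windows.net/_checkpoints/ticks")
    .toTable("lake.tick_events"))   # managed Delta table as the sink
```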
Team Valley Trading Estate, Gateshead, Tyne and Wear, England, United Kingdom
Nigel Wright Group
modelling and downstream integration into Power BI. Other key responsibilities will include: Build scalable ETL/ELT pipelines using Azure Data Factory and Synapse Model and store data using Delta Lake Contribute to CI/CD workflows for data pipelines Support data APIs from platforms like Intelligent Office, AJ Bell etc Document architecture, data flows and best practices …
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
Amtis Professional Ltd
improve data infrastructure Explore AI-driven enhancements to boost data accuracy and productivity Requirements: Strong experience with: Azure Databricks, Data Factory, Blob Storage Python/PySpark SQL Server, Parquet, Delta Lake Deep understanding of: ETL/ELT, CDC, stream processing Lakehouse architecture and data warehousing Scalable pipeline design and database optimisation A proactive mindset, strong problem-solving skills …
unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the …
Databricks for improved traceability Implement Unity Catalog for automated data lineage Deliver backlog items through Agile sprint planning Skills & Experience Strong hands-on experience with Databricks, Fabric, Apache Spark, Delta Lake Proficient in Python, SQL, and PySpark Familiar with Azure Data Factory, Event Hub, Unity Catalog Solid understanding of data governance and enterprise architecture Effective communicator with experience …
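The automated lineage mentioned above is captured by Unity Catalog and, where the Databricks lineage system table is enabled, can be queried directly. The sketch below follows the documented system-tables naming; the target table is a hypothetical example and `spark` is assumed to come from a Databricks notebook.

```python
# Hedged sketch: inspecting upstream/downstream lineage for one table
# from the Unity Catalog lineage system table.
lineage = spark.sql("""
    SELECT source_table_full_name, target_table_full_name, event_time
    FROM system.access.table_lineage
    WHERE target_table_full_name = 'finance.reporting.daily_positions'
    ORDER BY event_time DESC
""")
lineage.show(truncate=False)
```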