Middlesbrough, Cleveland, England, United Kingdom - Hybrid / WFH Options
Reed
We are on the lookout for a Data Engineer with a robust background in SQL Server and ideally Azure Data Factory. The ideal candidate will have a wealth of experience in business intelligence, data modelling, and visualisation, and will be …
Summary: Join a team building a modern Azure-based data platform. This hands-on engineering role involves designing and developing scalable, automated data pipelines using tools like Data Factory, Databricks, Synapse, and Logic Apps. You'll work across the full data lifecycle - from ingestion to transformation and delivery - enabling smarter, faster insights.
Key Responsibilities:
* Develop and maintain data pipelines using …
* Collaborate with cross-functional teams in an agile environment.
Collaboration With:
* Data Engineers, Architects, Product Owners, Test Analysts, and BI Teams.
Skills & Experience:
* Proficiency in Azure tools (Data Factory, Databricks, Synapse, etc.).
* Strong SQL and experience with data warehousing (Kimball methodology).
* Programming skills in Python, Scala, or PySpark.
* Familiarity with Power BI, SharePoint, and data integration technologies.
* Understanding …
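To give a flavour of the pipeline work this role describes - building curated, Kimball-style tables on Databricks - here is a minimal PySpark sketch. It is an illustration only, not the employer's code: the landing path, column names, and target table are all assumed.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided as `spark` in a Databricks notebook

# Read a hypothetical raw landing zone and shape a Kimball-style dimension
raw_orders = spark.read.json("/mnt/raw/orders/")

dim_customer = (
    raw_orders
    .select("customer_id", "customer_name", "country")
    .dropDuplicates(["customer_id"])
    .withColumn("load_ts", F.current_timestamp())
)

# Persist as a Delta table for downstream Synapse / Power BI consumption
dim_customer.write.format("delta").mode("overwrite").saveAsTable("gold.dim_customer")
```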
Sheffield, South Yorkshire, Yorkshire, United Kingdom - Hybrid / WFH Options
Fdo Consulting Limited
Databricks Data Engineer (roles available at Lead, Senior and Mid Level), £50,000 - £85,000 + benefits. SQL, ETL, Data Warehousing, Databricks etc. Home based with one day a month at the office in Nottingham. Strong commercial knowledge of Databricks is required for this role. An expanding SaaS product company is looking for a number of Data Engineers as it continues to grow. … Data Engineers and BAs to understand the data needs of the business. Skills required include: previous experience as a Data Engineer in a delivery-focused environment; excellent knowledge of Databricks; experience analysing complex business problems and designing workable technical solutions; excellent knowledge of the SDLC, including testing and delivery in an agile environment; excellent knowledge of SQL and ETL; experience … expand its data team. In these roles you will use your technical and soft/people skills to help the data team develop further. Strong, hands-on Databricks skills are mandatory for this role. This role is home based with one day a month at their office in Nottingham. Salary is in the range £50,000 - £85,000 + benefits …
decision making for Cox Automotive. You'll collaborate with a talented team, using open-source tools such as R, Python, and Spark, data visualisation tools like Power BI, and the Databricks data platform. Key Responsibilities: Develop and implement analytics strategies that provide actionable insights for our business and clients. Apply the scientific method to create robust, reproducible solutions. Collaborate with stakeholders … seamlessly with team members and external clients. Proficiency in R or Python. Solid understanding of SQL; experience working with Spark (Java, Python, or Scala variants) and cloud platforms like Databricks is a plus. Strong statistical knowledge, including hypothesis testing, confidence intervals, and A/B testing. Ability to understand and communicate the commercial impact of data activities. Why Join Us …
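Since the listing highlights hypothesis testing, confidence intervals, and A/B testing, here is a hedged Python illustration of that kind of analysis. The conversion counts are invented, and the use of statsmodels is my assumption rather than anything named in the advert.

```python
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

# Hypothetical A/B test results: conversions and visitors per variant
conversions = [430, 478]     # variant A, variant B
visitors = [10_000, 10_000]

# Two-proportion z-test for a difference in conversion rates
z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)

# 95% confidence interval for variant B's conversion rate
ci_low, ci_high = proportion_confint(conversions[1], visitors[1], alpha=0.05)

print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
print(f"Variant B conversion rate 95% CI: ({ci_low:.4f}, {ci_high:.4f})")
```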
Data Engineer (Databricks) - Leeds. Our client is a global innovator and world leader with a highly recognisable name within technology. They are looking for Data Engineers with significant Databricks experience to join an exceptional Agile engineering team. We are seeking a Data Engineer with strong Python, PySpark, and SQL experience, a clear understanding of Databricks, and a passion for Data …
Manchester, Lancashire, United Kingdom - Hybrid / WFH Options
Parking Network BV
models, exploring customer behaviours, and supporting personalisation strategies - with opportunities to work on NLP projects too. You'll also take ownership of projects, support our data science tooling (including Databricks and AWS), and collaborate closely with experts in Data Engineering, BI, Analytics, and Data Governance to solve problems and create scalable solutions that make a tangible difference. What's in … and continuously develop your skills in a collaborative, hybrid working environment. About you. Role Responsibilities: Design, build, and maintain scalable machine learning pipelines using Python and PySpark. Work within Databricks to develop, schedule, and monitor data workflows, utilising Databricks Asset Bundles. Collaborate with data analysts, engineers, and other scientists to deliver clean, reliable, and well-documented datasets. Develop and maintain … skills with a problem-solving mindset. Strong analytical and communication skills, with the ability to tailor complex insights for both technical and non-technical audiences. Hands-on experience with Databricks for deploying, monitoring, and maintaining machine learning pipelines. Experience working with AWS data services and architectures. Good understanding of code versioning and CI/CD tools and practices. Familiarity with …
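As an illustration of the "scalable machine learning pipelines using Python and PySpark" this role mentions, here is a minimal Spark ML sketch. The feature table, column names, and model choice are assumptions for the example, not details from the advert.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.getOrCreate()  # pre-created in a Databricks notebook

# Hypothetical curated feature table produced by upstream engineering pipelines
df = spark.table("silver.customer_features")

assembler = VectorAssembler(
    inputCols=["recency_days", "frequency", "monetary_value"],
    outputCol="features",
)
lr = LogisticRegression(featuresCol="features", labelCol="churned")

# Fit the end-to-end pipeline and persist it for a scheduled scoring job
model = Pipeline(stages=[assembler, lr]).fit(df)
model.write().overwrite().save("/mnt/models/churn_lr")
```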
We are looking for an experienced Data Governance Lead to drive enterprise-wide data governance initiatives with a focus on modern cloud data platforms, specifically Databricks and Microsoft Azure. This role will define and enforce data policies, standards, and best practices that ensure high data quality, security, and regulatory compliance across our data ecosystem. You will … implement enterprise-level data governance frameworks, standards, and operating models, aligned with business objectives and compliance requirements. Collaborate with data engineering and platform teams to integrate data governance within Databricks and Azure environments, including Data Lake, Synapse, and Purview. Define and enforce data classification, access management, and lineage tracking using Azure-native and third-party tools. Lead the Data Governance … pipelines and lakehouses. Champion the use of metadata management, data catalogs (e.g., Azure Purview, Unity Catalog), and standardized business glossaries. Provide governance oversight for data sharing and consumption in Databricks notebooks, Power BI reports, and machine learning workflows. Drive awareness and adoption of governance policies through training, documentation, and data literacy programs. Ensure data governance supports regulatory compliance (e.g., GDPR …
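To make the "data classification and access management" responsibilities more concrete, here is a small, hedged sketch of Unity Catalog governance commands run from a Databricks notebook. The catalog, schema, table, column, and group names are placeholders, and the exact SQL syntax should be checked against current Databricks documentation.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # `spark` already exists in a Databricks notebook

# Classification: tag a sensitive column so it is discoverable and governable in the catalog
spark.sql("""
    ALTER TABLE main.sales.customers
    ALTER COLUMN email SET TAGS ('classification' = 'pii')
""")

# Access management: grant least-privilege read access to a governed analyst group
spark.sql("GRANT SELECT ON TABLE main.sales.customers TO `data-analysts`")
```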
Engineering, Data Science, Analytics, and DevOps teams to align operational strategies with technical and business requirements. Optimize operational performance and cost management for services including Azure Data Factory, Azure Databricks, Delta Lake, and Azure Data Lake Storage. Serve as the domain expert in DataOps by providing strategic guidance, mentoring colleagues, and driving continuous process improvements. What you will need: Demonstrable … with agile teams and driving automation of data workflows within the Microsoft Azure ecosystem. Hands-on expertise with the Azure Data Platform, with components such as Azure Data Factory, Azure Databricks, Azure Data Lake Storage, Delta Lake, Azure SQL, Purview and APIM. Proficiency in developing CI/CD data pipelines and strong programming skills in Python, SQL, Bash, and PySpark for …
data integration processes to bring in data from on-premises, cloud, and external APIs into Azure Data Lake Storage (ADLS). Data Transformation & Modeling: Utilize Azure Data Factory/Databricks (PySpark/Scala) to build scalable data processing and transformation workflows for both batch and streaming data. Develop data models and implement data partitioning, indexing, and schema optimization to improve … accuracy, completeness, and integrity across the Azure environment. Develop and enforce data quality checks and validation rules to maintain high levels of data consistency using Azure Data Factory and Databricks. Leads the delivery of complex technical solutions, provides technical advice and guidance to other software developers and takes responsibility for ensuring that all code meets Group standards, guidelines and …
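The partitioning and schema-optimisation work described above might look something like the following PySpark/Delta sketch. Storage paths, column names, and the Z-ORDER choice are assumptions added for illustration; OPTIMIZE/ZORDER is Databricks-specific functionality.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS paths for raw and curated zones
RAW = "abfss://raw@mydatalake.dfs.core.windows.net/events"
CURATED = "abfss://curated@mydatalake.dfs.core.windows.net/events"

events = spark.read.format("delta").load(RAW)

# Partition by date so downstream queries filtering on date prune whole partitions
(events
    .withColumn("event_date", F.to_date("event_ts"))
    .write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save(CURATED))

# Compact small files and co-locate rows that are frequently filtered by customer_id
spark.sql(f"OPTIMIZE delta.`{CURATED}` ZORDER BY (customer_id)")
```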
Manchester, Lancashire, United Kingdom - Hybrid / WFH Options
First Central Services
and version management of large numbers of data science models (Azure DevOps). You'll support the implementation of Machine Learning Ops on cloud (Azure & Azure ML; experience with Databricks is advantageous). You'll protect against model degradation and operational performance issues through the development and continual automated monitoring of model execution and model quality. You'll manage automatic model … and integration. Basic understanding of networking concepts within Azure. Familiarity with Docker and Kubernetes is advantageous. Experience within the financial/insurance services industry is advantageous. Experience with AzureML and Databricks is advantageous. Skills & Qualifications: Strong understanding of Microsoft Azure (Azure ML, Azure Stream Analytics, Cognitive Services, Event Hubs, Synapse, and Data Factory). Fluency in common data science coding capabilities such …
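A minimal sketch of the kind of automated model-quality monitoring this role describes: evaluating recent predictions against known outcomes and flagging degradation against a threshold. The data, metric, and threshold are invented for illustration; in practice the check would read from a monitoring store and raise an alert through the team's own tooling.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def check_model_quality(scored: pd.DataFrame, auc_floor: float = 0.70) -> bool:
    """Return True if the model still meets the agreed quality bar."""
    auc = roc_auc_score(scored["actual"], scored["predicted_proba"])
    print(f"Rolling AUC: {auc:.3f} (floor {auc_floor})")
    return auc >= auc_floor

# Made-up recent predictions with ground-truth outcomes
recent = pd.DataFrame({
    "actual":          [1, 0, 1, 1, 0, 0, 1, 0],
    "predicted_proba": [0.91, 0.22, 0.67, 0.80, 0.35, 0.40, 0.58, 0.12],
})

if not check_model_quality(recent):
    raise RuntimeError("Model quality below floor - trigger retraining / alerting")
```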
Manchester, Lancashire, United Kingdom - Hybrid / WFH Options
First Central Services
a pro, with automation, templates, and CI/CD pipelines all part of your toolkit. Azure Know-How - you've got solid experience across Azure services - Synapse, Data Factory, Databricks, DevOps, networking - and you know how to keep things secure, efficient, and cost-effective. Great Communicator & Team Collaborator - whether you're talking to data engineers, security, or stakeholders, you can … Git. Proven to be an excellent communicator with a clear passion for data & analytics. Deep engineering and database skills in SQL Server, Azure Synapse, Data Factory, Spark compute and Databricks technologies. Experience of developing coding standards and delivery methods that others follow. Experience of testing techniques, tools and approaches. Extensive experience of the full data lifecycle. Will have been involved …
Data Engineer. Join a Trailblazing Team in Clinical Research Data Management. Step into a role that places you at the cutting edge of clinical trials, leveraging Electronic Health Record (EHR) data to revolutionise patient care and research. Based in the innovative …
Manchester, North West, United Kingdom - Hybrid / WFH Options
IO Associates
Data Engineer - Outside IR35 - £500 daily rate - Manchester - 6 Month Role. Overview: We're seeking an experienced Contract Data Engineer with a strong background in Databricks and cloud data platforms to support a high-impact data transformation programme. This hybrid role requires 2 days per week onsite in our Manchester office (non-negotiable), with the remainder remote. You'll join … paced, delivery-focused data team responsible for building and optimising scalable, production-grade data pipelines and infrastructure. Key Responsibilities: Design and implement robust, scalable ETL/ELT pipelines using Databricks and Apache Spark. Ingest, transform, and manage large volumes of data from diverse sources. Collaborate with analysts, data scientists, and business stakeholders to deliver clean, accessible datasets. Ensure high performance … best practices. Work with cloud-native tools and services (preferably Azure). Required Skills & Experience: Proven experience as a Data Engineer on cloud-based projects. Strong hands-on skills with Databricks, Apache Spark, and Python or Scala. Proficient in SQL and working with large-scale data environments. Experience with Delta Lake, Azure Data Lake, or similar technologies. Familiarity with version control …
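To illustrate the incremental ETL/ELT pattern the contract describes on Databricks, here is a hedged Delta Lake upsert sketch. The source path, target table, and join key are invented for the example.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Hypothetical change batch landed by an upstream ingestion job
updates = spark.read.format("delta").load("/mnt/landing/customers_changes")

# Upsert the changes into the curated table with a Delta MERGE
target = DeltaTable.forName(spark, "curated.customers")

(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```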
Manchester, North West, United Kingdom - Hybrid / WFH Options
Fdo Consulting Limited
Lead Data Engineer, £70,000 - £80,000 + benefits. SQL, ETL, Data Warehousing, Databricks etc. Home based with one day a month at the office in Nottingham. An expanding SaaS product company is looking for a Lead Data Engineer as it continues to grow. In this hands-on role you will be part of the team responsible for designing, creating, deploying and managing …
Leeds, West Yorkshire, Yorkshire, United Kingdom - Hybrid / WFH Options
Unolabs
Job Title: Platform Engineer - Databricks Modernisation (DPS to CDP Migration). Contract: 6-month rolling. Job Type: Hybrid/Remote. Project Overview: The current system runs on a legacy Private Virtual Cloud (PVC) Databricks deployment, which is being phased out by late 2025. This project involves migrating pipelines and platform infrastructure running on PVC to the Databricks Enterprise Edition (E2 … to ensure smooth migration of pipelines. Key Responsibilities: Infrastructure Migration & Automation - split and migrate the codebase into sub-repos tailored for CDP deployment. Refactor Terraform modules to deploy infrastructure for Databricks clusters and supporting services on AWS EC2-based Databricks E2. Manage infrastructure-as-code to provision resources such as AWS IAM roles, S3 bucket permissions, and Jenkins agents. Ensure the … mitigations. Modify and maintain Jenkins pipelines to deploy to both environments, ensuring consistent test coverage across shared and core repos. Dockerization & Service Management - build and publish Docker images for Databricks compute environments and for supporting microservices such as Broker, Scheduler, and Status Monitor. Push and manage Docker images in AWS ECR (Elastic Container Registry) and integrate them with GitLab CI/CD …
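The "build and push to ECR" step could be scripted in many ways; the project reportedly uses Jenkins/GitLab CI, but as a hedged Python-only sketch (using boto3 and the Docker SDK) it might look like this. The registry URL, repository name, and tag are placeholders.

```python
import base64

import boto3
import docker

# Hypothetical ECR repository for one of the supporting services
REPO = "123456789012.dkr.ecr.eu-west-1.amazonaws.com/scheduler"
TAG = "2025-01-01"

# Obtain short-lived ECR credentials
ecr = boto3.client("ecr", region_name="eu-west-1")
auth = ecr.get_authorization_token()["authorizationData"][0]
username, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")

# Build the image from the Dockerfile in the current directory and push it
client = docker.from_env()
image, _ = client.images.build(path=".", tag=f"{REPO}:{TAG}")
client.images.push(REPO, tag=TAG, auth_config={"username": username, "password": password})
```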
Leeds, West Yorkshire, United Kingdom - Hybrid / WFH Options
Tenth Revolution Group
native data solutions? We're recruiting on behalf of a dynamic organisation in Leeds that's investing in its data platform - and they're looking for someone with strong Databricks expertise to join their team. About the role: Designing and developing robust data pipelines using Azure Data Factory, Databricks, and Synapse Analytics. Working with Delta Lake and Azure Data Lake … Azure-based architecture. Ensuring best practices in data governance, security, and performance tuning. Requirements: Proven experience with Azure Data Services (ADF, Synapse, Data Lake). Strong hands-on experience with Databricks (including PySpark or SQL). Solid SQL skills and understanding of data modelling and ETL/ELT processes. Familiarity with Delta Lake and lakehouse architecture. A proactive, collaborative approach to problem … innovation. Benefits: Competitive salary up to £65,000. Flexible hybrid working (2 days in the Leeds office). Generous holiday allowance and pension scheme. Ongoing training and certification support (Azure, Databricks, etc.). A supportive, inclusive team culture with real opportunities for growth. Please note: this is a permanent role for UK residents only. This role does not offer sponsorship. You must …
Azure Data Platform Engineer, you will be responsible for: Designing, building, and maintaining scalable data solutions on Microsoft Azure. Designing and implementing scalable data pipelines using Azure Data Factory, Databricks, Synapse Analytics, and other Azure services. Leading technical workstreams and supporting project delivery. Acting as the subject-matter expert for cloud-based data engineering. Ensuring data governance, security, and compliance … maintain best practices. If you possess a combination of some of the following skills, then LET'S TALK! Proven experience with Azure data services (SQL, Data Factory, Data Lake, Synapse, Databricks, Azure SQL and Cosmos DB). Strong proficiency in designing and operating scalable data solutions and pipelines. Platform engineering - Terraform. Familiarity with cloud security, performance optimisation and monitoring tools (Azure …
Leeds, Yorkshire, United Kingdom - Hybrid / WFH Options
PEXA Group Limited
customer-facing reports. You will optimise the transformation pipeline from start to finish, guaranteeing that datasets are robust, tested, secure, and business-ready. Our data platform is built using Databricks, with data pipelines written in PySpark and orchestrated using Airflow. You will be expected to challenge and improve current transformations, ensuring they meet our performance, scalability, and data governance needs. … end-to-end data quality, from raw ingested data to business-ready datasets. Optimise PySpark-based data transformation logic for performance and reliability. Build scalable and maintainable pipelines in Databricks and Airflow. Implement and uphold GDPR-compliant processes around PII data. Collaborate with stakeholders to define what "business-ready" means, and confidently sign off datasets as fit for consumption. Put … Help shape our approach to reliable data delivery for internal and external customers. Skills & Experience Required: Extensive hands-on experience with PySpark, including performance optimisation. Deep working knowledge of Databricks (development, architecture, and operations). Proven experience working with Airflow for orchestration. Proven track record in managing and securing PII data, with GDPR compliance in mind. Experience in data governance processes …
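For a sense of how a Databricks transformation might be orchestrated with Airflow, as the listing describes, here is a hedged sketch using Airflow's Databricks provider. The DAG id, schedule, cluster spec, and notebook path are assumptions, not details from the role.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="customer_facts_daily",
    start_date=datetime(2024, 1, 1),
    schedule="0 3 * * *",      # nightly, after source loads complete
    catchup=False,
) as dag:
    run_transform = DatabricksSubmitRunOperator(
        task_id="run_pyspark_transform",
        databricks_conn_id="databricks_default",
        json={
            "new_cluster": {
                "spark_version": "14.3.x-scala2.12",
                "node_type_id": "i3.xlarge",
                "num_workers": 2,
            },
            "notebook_task": {"notebook_path": "/Repos/data/transform_customer_facts"},
        },
    )
```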
in our data science practices. This is a fantastic opportunity for a curious, solutions-focused data scientist to help build out our capability, working with cutting-edge tools like Databricks, AWS data services, PySpark, and CI/CD pipelines. What's in it for you? You'll be joining a collaborative, supportive team with a real passion for data-led … classification, regression, forecasting, and/or NLP. Analytical mindset with the ability to present insights to both technical and non-technical audiences. Experience deploying and maintaining ML pipelines within Databricks. Comfortable working with AWS data services and modern data architectures. Experience with CI/CD pipelines and code versioning best practices. Preferred skills: Familiarity with Databricks Asset Bundles (DAB) for …
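The "deploying and maintaining ML pipelines within Databricks" requirement typically involves MLflow tracking, which ships with the platform. Below is a small, self-contained illustration using scikit-learn and synthetic data; the run name and model choice are invented for the example.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real feature set
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="churn_rf_baseline"):
    model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", acc)          # tracked for monitoring and comparison
    mlflow.sklearn.log_model(model, artifact_path="model")
```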
technical background but has recently worked in a managerial role focused on mentoring, coaching, reviewing code, and standard setting. The role will focus on the development of the client's Databricks platform (AWS preferred, but open to Azure; GCP experience also considered), utilising Python and SQL, and will contribute to CI/CD pipelines, strategy development, cost optimisation and data governance frameworks. Job duties: … engineers, helping to mentor and coach the team. Manage the adoption of automated CI/CD pipelines. Implement a new delivery roadmap. Contribute to the development of a new Databricks system in AWS (AWS experience is preferred but they are open to managers with Azure experience). Cost optimisation. Establish data governance frameworks for secure handling of delivery information. Requirements … Manager: 6+ years' experience in a hands-on data engineering role, with over a year's recent experience in a managerial role coaching similar-sized teams. Deep knowledge of the Databricks platform. Hands-on Python development experience. SQL optimisation. Experience with large-scale data pipeline optimisation. Experience with streaming and batch Spark workloads. Strong people management skills. Role: Data Engineering Manager …
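Since the requirements call out both streaming and batch Spark workloads on Databricks, here is a hedged Structured Streaming sketch showing an incremental bronze-to-silver load. Table names, the checkpoint path, and the availableNow trigger choice are assumptions for the example.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Incrementally read new rows from a hypothetical bronze Delta table
stream = (
    spark.readStream.table("bronze.delivery_events")
         .withColumn("ingested_at", F.current_timestamp())
)

# Write to the silver table; availableNow processes the backlog then stops,
# which keeps cluster costs close to a scheduled batch job
(stream.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/delivery_events")
    .trigger(availableNow=True)
    .toTable("silver.delivery_events"))
```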
actionable insights. Develop and maintain robust Power BI dashboards and reports, ensuring accuracy and usability for clinical and operational teams. Design and build data pipelines and transformation logic using Databricks and SQL Mesh. Work with stakeholders across the Trust to gather requirements, interpret business needs, and deliver tailored BI solutions. Successful candidates must have experience with SQL Server, Power BI, Databricks …