Databricks, Azure Data Lake Storage, Delta Lake, Azure SQL, Purview, and APIM. Proficiency in developing CI/CD data pipelines and strong programming skills in Python, SQL, Bash, and PySpark for automation. Strong aptitude for data pipeline monitoring and an understanding of data security practices such as RBAC and encryption. Implemented data and pipeline observability dashboards, ensuring high data More ❯
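The pipeline-monitoring and observability skills named above boil down to running automated checks on each batch a pipeline delivers. A minimal pure-Python sketch follows; the check names, thresholds, and column names are illustrative assumptions, not taken from any specific platform:

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str

def run_quality_checks(rows, required_columns, min_rows=1, max_null_rate=0.05):
    """Run simple observability checks on a batch of records (list of dicts).

    Thresholds are illustrative; a real pipeline would load them from config
    and push failures to an alerting dashboard.
    """
    results = []
    # Volume check: did the pipeline deliver enough rows?
    results.append(CheckResult(
        "row_count", len(rows) >= min_rows,
        f"{len(rows)} rows (minimum {min_rows})"))
    # Null-rate check per required column.
    for col in required_columns:
        nulls = sum(1 for r in rows if r.get(col) is None)
        rate = nulls / len(rows) if rows else 1.0
        results.append(CheckResult(
            f"null_rate:{col}", rate <= max_null_rate,
            f"{rate:.1%} null (threshold {max_null_rate:.0%})"))
    return results
```

In a Databricks job the same checks would typically run as a post-write task, with results landed in a metrics table that feeds the observability dashboard.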
and knowledge sharing. Mission Critical Skills Proven experience as a Data Engineer, Data Platform Engineer or similar technical role in enterprise environments, with advanced SQL development skills, Python/PySpark, and demonstrated experience with modern ETL/ELT frameworks Hands-on expertise with cloud-native data platforms (preferably Microsoft Fabric, Synapse, Data Factory, and related services) Experience integrating data More ❯
Rochdale, Greater Manchester, North West, United Kingdom Hybrid / WFH Options
Footasylum Ltd
knowledge sharing sessions and self-development. About You Experience with finance/financial systems and concepts Azure Databricks Azure Data Factory Excellent SQL skills Good Python/Spark/PySpark skills Experience of the Kimball methodology and star schemas (dimensional modelling). Experience of working with enterprise data warehouse solutions. Experience of working with structured and unstructured data Experience of More ❯
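The Kimball-style dimensional modelling asked for above centres on replacing source-system natural keys with warehouse-assigned surrogate keys when loading dimensions and facts. A minimal sketch, assuming hypothetical `customer` and `sales` tables (the table and column names are illustrative, not from the listing):

```python
def build_customer_dim(source_rows):
    """Assign surrogate keys to natural keys, Kimball-style.

    Returns (dimension_rows, key_map). Column names are hypothetical.
    """
    key_map, dim_rows = {}, []
    for row in source_rows:
        nk = row["customer_id"]              # natural key from the source system
        if nk not in key_map:
            key_map[nk] = len(key_map) + 1   # warehouse-assigned surrogate key
            dim_rows.append({"customer_sk": key_map[nk],
                             "customer_id": nk,
                             "name": row["name"]})
    return dim_rows, key_map

def build_fact_sales(source_rows, customer_keys):
    """Replace natural keys with surrogate keys when loading the fact table."""
    return [{"customer_sk": customer_keys[r["customer_id"]],
             "amount": r["amount"]} for r in source_rows]
```

The same lookup is what a Databricks pipeline would express as a join from the staging table to the dimension on the natural key.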
Leeds, West Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
Forward Role
financial datasets Python experience, particularly for data processing and ETL workflows Hands-on experience with cloud platforms - Azure Experience designing and maintaining data pipelines using tools like Databricks and PySpark Knowledge of data warehousing solutions - Snowflake experience would be brilliant Understanding of CI/CD processes for deploying data solutions Some exposure to big data technologies and distributed processing More ❯
Azure. Especially Synapse, ADF and Power BI (Datasets and Reports). Ideally SSIS, SSRS, SSAS with some understanding of Power App design and delivery Expert in SQL and Python (PySpark); any other object-oriented language skills would be a benefit Expert in data modelling and data architecture concepts Experience of setup and management of code management & deployment tools More ❯
Leeds, West Yorkshire, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
architecture Ensuring best practices in data governance, security, and performance tuning Requirements: Proven experience with Azure Data Services (ADF, Synapse, Data Lake) Strong hands-on experience with Databricks (including PySpark or SQL) Solid SQL skills and understanding of data modelling and ETL/ELT processes Familiarity with Delta Lake and lakehouse architecture A proactive, collaborative approach to problem-solving More ❯
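The Delta Lake and lakehouse experience asked for above usually means mastering the MERGE-based upsert pattern for incremental loads. Its semantics can be sketched in plain Python; in Databricks this would be `MERGE INTO` in SQL or `DeltaTable.merge` in PySpark, and the key and column names here are assumptions for illustration:

```python
def upsert(target, updates, key="id"):
    """Merge updates into target by key: update matches, insert the rest.

    Mimics the row-level semantics of a Delta Lake MERGE; both inputs are
    lists of dicts standing in for tables.
    """
    by_key = {row[key]: dict(row) for row in target}
    for row in updates:
        if row[key] in by_key:
            by_key[row[key]].update(row)     # WHEN MATCHED THEN UPDATE
        else:
            by_key[row[key]] = dict(row)     # WHEN NOT MATCHED THEN INSERT
    return sorted(by_key.values(), key=lambda r: r[key])
```

What Delta Lake adds over this sketch is transactional guarantees: the whole merge commits atomically to the table's log, which is what makes the pattern safe for concurrent ELT jobs.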
name within technology. They are looking for Data Engineers with significant Databricks experience to join an exceptional Agile engineering team. We are seeking a Data Engineer with strong Python, PySpark, and SQL experience, a clear understanding of Databricks, and a passion for Data Science (R, Machine Learning, and AI). Database experience with SQL and NoSQL databases such as More ❯
understanding of AI/ML/DL and Statistics, as well as coding proficiency using related open source libraries and frameworks. Significant proficiency in SQL and languages like Python, PySpark and/or Scala. Can lead, work independently as well as play a key role in a team. Good communication and interpersonal skills for working in a multicultural work More ❯
Sheffield, South Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
Erin Associates
SQL Server. Understanding of applying master data management principles, data quality frameworks and data governance best practices. Understanding of Azure Data Factory, Fabric and similar technologies Tech Stack Python, PySpark, SQL, Xpath, XML, Azure-based Data Science tools, BI tools, Data Visualisation, Agile. The company have an excellent reputation within their sector and have shown consistent growth year-on More ❯
Trafford Park, Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Ikhoi Recruitment
Health & Safety. About You: Here’s what we’re looking for:
· Proficiency in Python and SQL for analysis, model development, and data interrogation.
· Experience in handling large datasets with PySpark and managing distributed data processing.
· Comfortable deploying statistical or ML models into production environments.
· Strong understanding of cloud infrastructure, preferably AWS.
· A methodical, problem-solving mindset with high attention More ❯
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Vermelo RPO
Knowledge of the technical differences between different packages for some of these model types would be an advantage. Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL) A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science) Experience of WTW’s Radar software is preferred Proficient at communicating results in a concise More ❯
Manchester, North West, United Kingdom Hybrid / WFH Options
Gerrard White
Knowledge of the technical differences between different packages for some of these model types would be an advantage. Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL) A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science) Experience of WTW's Radar software is preferred Proficient at communicating results in a concise More ❯
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Vermelo RPO
ideally with a focus in Motor Experience and detailed technical knowledge of GLMs/Elastic Nets, GBMs, GAMs, Random Forests, and clustering techniques Experience in programming languages (e.g. Python, PySpark, R, SAS, SQL) Proficient at communicating results in a concise manner both verbally and written Behaviours: Motivated by technical excellence Team player Self-motivated with a drive to learn More ❯
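Of the model families listed above, the GLM is the workhorse of motor pricing. A minimal sketch of the idea, fitting a one-feature logistic GLM (logit link) by gradient descent; in practice this would be done with statsmodels, scikit-learn, or WTW's Radar rather than by hand, and the data here is invented for illustration:

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    """Fit a one-feature logistic GLM (logit link) by gradient descent.

    A minimal illustration of the GLM family; real pricing work would use
    a dedicated library and many rating factors.
    """
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # inverse link (sigmoid)
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Predicted probability for a single feature value."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))
```

An Elastic Net variant would simply add an L1 + L2 penalty term to the gradient above, shrinking coefficients toward zero.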
Manchester, North West, United Kingdom Hybrid / WFH Options
Gerrard White
ideally with a focus in Motor Experience and detailed technical knowledge of GLMs/Elastic Nets, GBMs, GAMs, Random Forests, and clustering techniques Experience in programming languages (e.g. Python, PySpark, R, SAS, SQL) Proficient at communicating results in a concise manner both verbally and written Behaviours: Motivated by technical excellence Team player Self-motivated with a drive to learn More ❯
Lead Data Engineer (Databricks) - Leeds (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, Github, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer) Our … modern data platform using cutting-edge technologies, architecting big data solutions and developing complex enterprise data ETL and ML pipelines and projections. The successful candidate will have strong Python, PySpark, and SQL experience, a clear understanding of Databricks, and a passion for Data Science (R, Machine Learning, and AI). Database experience with SQL and No … Benefits To apply for this position please send your CV to Nathan Warner at Noir Consulting. (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, Github, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data More ❯
Data Engineer (Databricks) - Leeds (Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, Github, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Data Engineer) Our client is a global innovator and world leader with one of the most recognisable names within technology. They are looking for … Data Engineers with significant Databricks experience to join an exceptional Agile engineering team. We are seeking a Data Engineer with strong Python, PySpark, and SQL experience, a clear understanding of Databricks, and a passion for Data Science (R, Machine Learning, and AI). Database experience with SQL and NoSQL - Aurora, MS SQL Server, MySQL is … top performers. Location: Leeds Salary: £40k - £50k + Pension + Benefits To apply for this position please send your CV to Nathan Warner at Noir Consulting. (Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, Github, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Data Engineer More ❯
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
operations, and commercial teams — all while growing your skills in a modern, cloud-first environment. Why This Role Stands Out: You’ll build scalable, production-grade data pipelines using PySpark , SQL , and Databricks Your work will directly power analytics, BI, and data science across the business You’ll be part of a collaborative, forward-thinking data team that values … engineering You’ll get space to grow, learn new tools, and make a real impact What You’ll Bring: 2–5 years’ experience in data engineering or similar Strong PySpark and advanced SQL skills Hands-on experience with Databricks and building ETL/ELT pipelines Familiarity with CI/CD and version control Bonus Points For: Experience with Databricks More ❯
build, and optimize cloud-native data ecosystems that power analytics, governance, and business intelligence across the firm. You'll work with cutting-edge technologies like Azure Synapse, Databricks, Python, PySpark, and Collibra to deliver secure, high-performance solutions. What You'll Do Engineer and optimize modern data platforms using Azure and Databricks Build robust data pipelines with Python and … PySpark Automate deployments and infrastructure with DevOps and CI/CD Implement access controls and data protection policies Integrate governance tools like Collibra for metadata and lineage tracking Collaborate with cross-functional teams to enable data-driven decision-making What You'll Bring Proven experience in cloud-based data engineering (Azure, Databricks) Strong Python/PySpark skills and More ❯
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Parking Network BV
commercial, and operations - and this role will have a big say in what we build next. You'll be responsible for designing and building robust, scalable data pipelines using PySpark, SQL and Databricks - enabling our analytics, BI and data science colleagues to unlock real value across the business. This is a brilliant opportunity for someone who's passionate about … your expertise further - especially with tools like Databricks. Here's what will help you thrive in this role: 2-5 years in data engineering or a related field Strong PySpark and advanced SQL skills Practical experience building and maintaining ETL/ELT pipelines in Databricks Familiarity with CI/CD pipelines and version control practices Nice to have: Experience More ❯
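The pipelines described above follow the classic extract-transform-load shape. A pure-Python sketch of that shape, under the assumption of a hypothetical product feed; in Databricks the transform step would be PySpark `withColumn`/`filter` calls on a DataFrame and the load step a Delta table write:

```python
def extract(raw_lines):
    """Parse raw comma-separated lines into records (stand-in for a source read)."""
    return [dict(zip(["sku", "qty"], line.split(","))) for line in raw_lines]

def transform(records):
    """Cast types and drop invalid rows.

    In PySpark this would be column casts plus a filter on a DataFrame.
    """
    out = []
    for r in records:
        try:
            qty = int(r["qty"])
        except ValueError:
            continue  # a real pipeline would route bad rows to quarantine
        if qty > 0:
            out.append({"sku": r["sku"], "qty": qty})
    return out

def load(records, sink):
    """Append records to the sink (stand-in for writing a Delta table)."""
    sink.extend(records)
    return len(records)
```

Keeping each stage a separate, pure function is what makes such pipelines easy to unit-test before they are promoted through CI/CD.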
Azure Data Engineer - 1/2 days onsite Summary: Join a team building a modern Azure-based data platform. This hands-on engineering role involves designing and developing scalable, automated data pipelines using tools like Data Factory, Databricks, Synapse, and More ❯
Company: The Citation Group Position: Machine Learning Engineer Location: Wilmslow/Remote Type: Full-time – Permanent We’re looking for a Machine Learning Engineer to help us deploy, maintain, and continuously improve our machine learning models in production. As part More ❯
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Parking Network BV
For airports, for partners, for people. We are CAVU. At CAVU our purpose is to find new and better ways to make airport travel seamless and enjoyable for everybody. From the smallest ideas to the biggest changes. Every day here More ❯
practices. This is a fantastic opportunity for a curious, solutions-focused data scientist to help build out our capability, working with cutting-edge tools like Databricks, AWS data services, PySpark, and CI/CD pipelines. What's in it for you? You'll be joining a collaborative, supportive team with a real passion for data-led innovation. It's … business impact - we'd love to hear from you. About you: 2-5 years of experience in Data Science or a related field Strong programming skills in Python and PySpark Strong data science modelling skills across classification, regression, forecasting, and/or NLP Analytical mindset with the ability to present insights to both technical and non-technical audiences Experience More ❯
practices. This is a fantastic opportunity for a curious, solutions-focused data scientist to help build out our capability, working with cutting-edge tools like Databricks, AWS data services, PySpark, and CI/CD pipelines. What's in it for you? You'll be joining a collaborative, supportive team with a real passion for data-led innovation. It's … we can reach new heights. Together, we are CAVU. About You: 2-5 years of experience in Data Science or a related field Strong programming skills in Python and PySpark Strong data science modelling skills across classification, regression, forecasting, and/or NLP Analytical mindset with the ability to present insights to both technical and non-technical audiences Experience More ❯
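Of the forecasting skills listed above, the simplest building block is exponential smoothing. A minimal sketch; the smoothing parameter and series are illustrative, and real work would use a library such as statsmodels:

```python
def ses_forecast(series, alpha=0.5):
    """One-step-ahead forecast via simple exponential smoothing.

    Each observation updates the level as a weighted blend of the new value
    and the old level; alpha is an illustrative smoothing parameter.
    """
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level
```

Higher alpha weights recent observations more heavily, trading responsiveness against noise; tuning it (or the richer Holt-Winters variants) is part of the modelling judgment the role calls for.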
Leeds, Yorkshire, United Kingdom Hybrid / WFH Options
PEXA Group Limited
the transformation pipeline from start to finish, guaranteeing that datasets are robust, tested, secure, and business-ready. Our data platform is built using Databricks, with data pipelines written in PySpark and orchestrated using Airflow. You will be expected to challenge and improve current transformations, ensuring they meet our performance, scalability, and data governance needs. This includes work with complex … days per year for meaningful collaboration in either Leeds or Thame. Key Responsibilities Ensure end-to-end data quality, from raw ingested data to business-ready datasets Optimise PySpark-based data transformation logic for performance and reliability Build scalable and maintainable pipelines in Databricks and Airflow Implement and uphold GDPR-compliant processes around PII data Collaborate with stakeholders to … management, metadata management, and wider data governance practices Help shape our approach to reliable data delivery for internal and external customers Skills & Experience Required Extensive hands-on experience with PySpark, including performance optimisation Deep working knowledge of Databricks (development, architecture, and operations) Proven experience working with Airflow for orchestration Proven track record in managing and securing PII data, with More ❯
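The GDPR-compliant PII handling described above often relies on keyed pseudonymisation: replacing identifiers with deterministic tokens so datasets can still be joined without exposing raw values. A minimal sketch using HMAC-SHA256; the field names are illustrative assumptions:

```python
import hashlib
import hmac

def pseudonymise(record, pii_fields, secret: bytes):
    """Replace PII fields with keyed HMAC-SHA256 tokens.

    Keyed hashing keeps joins possible across datasets without exposing the
    raw value, and rotating or destroying the key supports erasure requests.
    Field names are illustrative.
    """
    out = dict(record)
    for field in pii_fields:
        if field in out and out[field] is not None:
            digest = hmac.new(secret, str(out[field]).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]   # truncated token
    return out
```

In a PySpark pipeline the same function would typically run as a UDF (or be replaced by built-in `sha2` plus a keyed salt) over the columns flagged as PII in the governance catalogue.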