London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
for a hands-on Senior Data Engineer who thrives in technically complex environments and enjoys solving large-scale data pipeline challenges. You'll work with tools like AWS Glue, PySpark, Iceberg, Databricks, and Snowflake, collaborating with data scientists and stakeholders across multiple business units. Key Responsibilities: Design, build, and maintain scalable data pipelines and architectures. Implement secure and efficient … initiatives. Act as a subject matter expert, guiding technical direction and mentoring junior engineers. What We're Looking For: Strong hands-on experience with AWS data engineering tools: Glue, PySpark, Athena, Iceberg, Lake Formation, etc. Proficiency in Python and SQL for data processing and analysis. Deep understanding of data governance, quality, and security best practices. Experience working with market …
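Listings like the one above centre on building ETL pipelines with Glue and PySpark. As a rough illustration of the extract-transform-load pattern those roles describe, here is a minimal plain-Python sketch; the stage names, record fields, and sample data are all invented for the example and not taken from any posting:

```python
# Toy extract-transform-load flow. All field names and data are
# hypothetical examples, not from the job listing above.

def extract(rows):
    """Parse raw pipe-delimited records into dicts (toy 'extract' stage)."""
    header, *body = rows
    cols = header.split("|")
    return [dict(zip(cols, line.split("|"))) for line in body]

def transform(records):
    """Drop malformed records and normalise types (toy 'transform' stage)."""
    out = []
    for r in records:
        if r.get("amount", "").lstrip("-").replace(".", "", 1).isdigit():
            out.append({"id": r["id"], "amount": float(r["amount"])})
    return out

def load(records):
    """Aggregate into a summary, standing in for writing a curated table."""
    return {"rows": len(records), "total": sum(r["amount"] for r in records)}

raw = ["id|amount", "1|10.5", "2|oops", "3|4.5"]
summary = load(transform(extract(raw)))
print(summary)  # {'rows': 2, 'total': 15.0}
```

A production Glue or PySpark job would express the same three stages as DataFrame reads, transformations, and writes over S3-backed tables; only the shape of the flow carries over here.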
years of software development, or 2+ years of technical support experience - Experience troubleshooting and debugging technical systems - Experience in Unix - Experience scripting in modern programming languages - Knowledge of Python, PySpark, Big Data and SQL Queries PREFERRED QUALIFICATIONS - Knowledge of web services, distributed systems, and web application development - Experience with REST web services, XML, JSON Our inclusive culture empowers Amazonians …
Manchester, North West, United Kingdom Hybrid / WFH Options
Gerrard White
ideally with a focus on Motor Experience and detailed technical knowledge of GLMs/Elastic Nets, GBMs, GAMs, Random Forests, and clustering techniques Experience in programming languages (e.g. Python, PySpark, R, SAS, SQL) Proficient at communicating results in a concise manner both verbally and in writing Behaviours: Motivated by technical excellence Team player Self-motivated with a drive to learn …
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential skills: Programming Languages such as Spark, Java, Python, PySpark, Scala (minimum of 2). Extensive Data Engineering hands-on experience (coding, configuration, automation, delivery, monitoring, security). ETL Tools such as Azure Data Factory (ADF) and Databricks or … UK, and you MUST have the Right to Work in the UK long-term without the need for Company Sponsorship. KEYWORDS Senior Data Engineer, Coding Skills, Spark, Java, Python, PySpark, Scala, ETL Tools, Azure Data Factory (ADF), Databricks, HDFS, Hadoop, Big Data, Cloudera, Data Lakes, Azure Data, Delta Lake, Data Lake, Databricks Lakehouse, Data Analytics, SQL, Geospatial Data, FME …
Data Engineer Manager Department: Tech Hub Employment Type: Permanent - Full Time Location: London Description Contract type: Permanent, full-time Hours: 37.5 Salary: circa £78,000 depending on experience Location: London WFH policy: Employees are required to attend the office 2 …
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential Skills: Programming Languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2). Extensive Big Data hands-on experience across coding/configuration/automation/monitoring/security is necessary. Significant AWS or Azure … the Right to Work in the UK long-term as our client is NOT offering sponsorship for this role. KEYWORDS Lead Data Engineer, Senior Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, Cloud, On-Prem, ETL, Azure Data Factory, ADF, Hadoop, HDFS, Azure Data, Delta Lake, Data Lake Please note that due to a high level …
Reading, Berkshire, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential skills: Programming Languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2). Extensive Data Engineering and Data Analytics hands-on experience Significant AWS hands-on experience Technical Delivery Manager skills Geospatial Data experience (including QGIS … support your well-being and career growth. KEYWORDS Principal Geospatial Data Engineer, Geospatial, GIS, QGIS, FME, AWS, On-Prem Services, Software Engineering, Data Engineering, Data Analytics, Spark, Java, Python, PySpark, Scala, ETL Tools, AWS Glue. Please note, to be considered for this role you MUST reside/live in the UK, and you MUST have the Right to Work …
Employment Type: Temporary
Salary: £80000 - £500000/annum Pension, Good Holiday, Insurances
Lead Data Engineer (Databricks) - Leeds (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, GitHub, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer) Our … modern data platform using cutting-edge technologies, architecting big data solutions and developing complex enterprise data ETL and ML pipelines and projections. The successful candidate will have strong Python, PySpark and SQL experience, possess a clear understanding of Databricks, as well as a passion for Data Science (R, Machine Learning and AI). Database experience with SQL and No … Benefits To apply for this position please send your CV to Nathan Warner at Noir Consulting. (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, GitHub, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data …
Server Data Warehouse to AWS, and helping to deliver scalable, real-time data solutions across the business. This role offers the chance to work with cutting-edge tools such as PySpark and Iceberg while collaborating with analysts, data scientists, and wider tech teams to drive automation, enable BI, and support advanced analytics. It's a role that balances hands-on … date with emerging data technologies and apply them where relevant. The Skill Requirements: Hands-on experience with AWS services (Glue, Lambda, S3, Redshift, EMR) Strong skills in Python, SQL, PySpark and pipeline orchestration Proven understanding of data warehousing and data lakehouse concepts Excellent problem-solving skills with the ability to resolve performance bottlenecks Clear communicator, comfortable working with technical …
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
operations, and commercial teams — all while growing your skills in a modern, cloud-first environment. Why This Role Stands Out: You’ll build scalable, production-grade data pipelines using PySpark, SQL, and Databricks Your work will directly power analytics, BI, and data science across the business You’ll be part of a collaborative, forward-thinking data team that values … engineering You’ll get space to grow, learn new tools, and make a real impact What You’ll Bring: 2–5 years’ experience in data engineering or similar Strong PySpark and advanced SQL skills Hands-on experience with Databricks and building ETL/ELT pipelines Familiarity with CI/CD and version control Bonus Points For: Experience with Databricks …
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
with a focus on performance, scalability, and reliability. Responsibilities Design and implement robust data migration pipelines using Azure Data Factory, Synapse Analytics, and Databricks Develop scalable ETL processes using PySpark and Python Collaborate with stakeholders to understand legacy data structures and ensure accurate mapping and transformation Ensure data quality, governance, and performance throughout the migration lifecycle Document technical processes … and support knowledge transfer to internal teams Required Skills Strong hands-on experience with Azure Data Factory, Synapse, Databricks, PySpark, Python, and SQL Proven track record in delivering data migration projects within Azure environments Ability to work independently and communicate effectively with technical and non-technical stakeholders Previous experience in consultancy or client-facing roles is advantageous …
and secure architectures on Databricks and distributed systems. Anticipate scaling challenges and ensure platforms are future-proof. Lead the design and development of robust, high-performance data pipelines using PySpark and Databricks. Define and ensure testing frameworks for data workflows. Ensure end-to-end data quality from raw ingestion to curated, trusted datasets powering analytics. Establish and enforce best … they add value to the ecosystem. Skills & Experience Required Broad experience as a Data Engineer including technical leadership Broad cloud experience, ideally both Azure and AWS Deep expertise in PySpark and distributed data processing at scale. Extensive experience designing and optimising in Databricks. Advanced SQL optimisation and schema design for analytical workloads. Strong understanding of data security, privacy, and …
alerting systems to maintain data health and accuracy Define KPIs and thresholds in collaboration with technical and non-technical stakeholders Develop and productionise machine learning and statistical models (Python, PySpark) Deploy monitoring solutions on AWS infrastructure Create scalable frameworks for future monitoring needs Investigate anomalies and ensure quick resolution of issues in the data pipeline Advocate for data quality … best practices across the business Provide mentorship and contribute to a culture of continuous improvement About You: Proficient in Python and SQL Experience working with large datasets, preferably using PySpark Solid understanding of AWS or similar cloud infrastructure Methodical, detail-oriented, and comfortable working independently Able to translate business needs into technical solutions Previous experience building monitoring or data …
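Several of the monitoring roles above mention defining KPIs and thresholds for alerting. As a minimal sketch of one common approach, a z-score threshold rule built on the standard-library `statistics` module; the metric values and the 3-sigma threshold are illustrative, not from any listing:

```python
# Flag a metric reading that deviates too far from its recent history.
# Values and threshold are invented for illustration.
import statistics

def check_metric(history, latest, n_sigma=3.0):
    """Return True if `latest` sits more than `n_sigma` standard
    deviations from the historical mean (a simple z-score rule)."""
    mean = statistics.fmean(history)
    sd = statistics.pstdev(history)
    if sd == 0:
        # Zero-variance history: any change from the mean is anomalous.
        return latest != mean
    return abs(latest - mean) / sd > n_sigma

history = [100, 102, 98, 101, 99, 100]
print(check_metric(history, 100))   # False: within normal range
print(check_metric(history, 140))   # True: anomalous spike
```

In a production setting a rule like this would typically run per KPI on a schedule (e.g. a Lambda over pipeline metrics), with thresholds agreed with stakeholders rather than hard-coded.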
Salford, Greater Manchester, North West, United Kingdom Hybrid / WFH Options
Searchworks Ltd
and implement scalable monitoring and alerting solutions across data pipelines Define critical metrics and thresholds in collaboration with technical and non-technical teams Build and deploy models using Python, PySpark, and AWS services Troubleshoot data anomalies and promote best practices in data quality Contribute to future-proof frameworks that support growth across teams and projects Mentor junior colleagues and … support a culture of curiosity and rigor. What We're Looking For: Strong Python and SQL skills Experience with large datasets and distributed processing (PySpark) Familiarity with deploying models into production environments Solid understanding of cloud infrastructure (preferably AWS) A proactive, detail-oriented mindset and the ability to deliver independently Bonus: Experience building monitoring or data quality frameworks …
London, South East, England, United Kingdom Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
Do Build & optimise recommendation/personalisation models. Drive incremental targeting beyond repeat-purchase patterns. Apply predictive analytics to customer behaviour & purchase history. Use Python (essential), SQL, and ideally PySpark to deliver insights. Collaborate with Product, Content, and Data Science to align models with business goals. Translate data into clear, actionable insights. (Bonus) Explore AI-driven ad content opportunities. … What We're Looking For Proven experience with predictive modelling/recommender systems. Strong Python & SQL skills (essential). Exposure to PySpark (desirable). Strong communicator with ability to link data to business outcomes. (Bonus) Experience with Generative AI or content automation.
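For the recommendation/personalisation work this listing describes, here is a toy sketch of item-based recommendation via co-occurrence counts over purchase histories; the basket data and scoring rule are invented for illustration, and real systems would use matrix factorisation or learned embeddings at scale:

```python
# Toy item-based recommender: score items by how often they are bought
# together, then suggest the top co-purchased items. Data is invented.
from collections import Counter
from itertools import combinations

def co_occurrence(baskets):
    """Count, for each ordered pair of items, how many baskets hold both."""
    counts = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            counts[(a, b)] += 1
            counts[(b, a)] += 1
    return counts

def recommend(item, counts, k=2):
    """Top-k items most often co-purchased with `item` (ties by name)."""
    scored = [(pair[1], n) for pair, n in counts.items() if pair[0] == item]
    return [i for i, _ in sorted(scored, key=lambda t: (-t[1], t[0]))][:k]

baskets = [["tea", "milk"], ["tea", "milk", "biscuits"], ["tea", "biscuits"]]
counts = co_occurrence(baskets)
print(recommend("tea", counts))  # ['biscuits', 'milk']
```

The same counting could be expressed in PySpark as a self-join of a (basket, item) DataFrame followed by a `groupBy` and count, which is how it would scale past memory.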
Manchester Area, United Kingdom Hybrid / WFH Options
Searchability®
to £70,000 p/a plus bonus & excellent benefits Shape the future of monitoring & alerting infrastructure for a global data insights provider We’re looking for strong Python, PySpark, and AWS skills, with experience in large-scale data systems Hybrid working – 2-3 days in the office ABOUT THE CLIENT: Our client is a pioneering force in the … embedding best practices in data quality and monitoring. KEY SKILLS/EXPERIENCE: Proficient in Python & SQL for analysis, modelling, and interrogation Experience with large datasets and distributed data processing (PySpark) Skilled in deploying ML/statistical models into production Knowledge of AWS cloud infrastructure Strong analytical, problem-solving, and troubleshooting skills Ability to work with technical and non-technical … express consent for us to process & submit (subject to required skills) your application to our client in conjunction with this vacancy only. KEY SKILLS: Data Scientist/Python/PySpark/AWS/SQL/Monitoring & Alerting/Data Quality/Machine Learning/Statistical Modelling …