Bolton, Greater Manchester, North West England, United Kingdom Hybrid / WFH Options
Lorien
…Engineer. Location: Manchester (hybrid). Type: Permanent. Must-have skills: working with AWS cloud services to manage and process data; using AWS Glue for building and running data pipelines; writing PySpark code in Python, along with SQL, to handle big data processing; creating and running SQL queries to analyze and manage data…
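By way of illustration only (not part of the listing): a minimal PySpark sketch of the kind of pipeline work this ad describes, reading raw data, cleaning it, aggregating it, and writing it back to the lake. Bucket paths and column names are hypothetical; on AWS Glue this logic would typically run inside a Glue job.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("glue-style-pipeline").getOrCreate()

# Hypothetical raw input; on Glue this would usually come from the Data Catalog.
orders = spark.read.option("header", "true").csv("s3://example-bucket/raw/orders/")

# Type and clean, then aggregate for downstream SQL analysis.
daily_totals = (
    orders.withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_count"))
)

# Write partitioned Parquet back to the lake for querying.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_totals/"
)
```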
City of London, London, United Kingdom Hybrid / WFH Options
Sanderson Recruitment
Team-leading experience - REQUIRED/demonstrable on CV (full support from the Engineering Manager is also available). Hands-on development/engineering background. Machine learning or data background. Technical experience: PySpark, Python, SQL, Jupyter. Cloud: AWS, Azure (cloud environment) - moving towards Azure. Nice to have: Astro/Airflow, Notebook. Reasonable adjustments: respect and equality are core values to us. We…
…analytics efforts. Required Skills & Experience: 4–5 years of commercial experience in data science, preferably in eCommerce or marketing analytics. Strong hands-on experience with Databricks, SQL, Python, and PySpark; knowledge of R and dashboarding tools is a plus. Proven experience with causal inference, MMM modelling, and experimentation. Strong analytical and problem-solving skills with the ability to…
…profiling, ingestion, collation and storage of data for critical client projects. How to develop and enhance your knowledge of agile ways of working and working in an open-source stack (PySpark/PySQL). Quality engineering professionals utilise Accenture delivery assets to plan and implement quality initiatives to ensure solution quality throughout delivery. As a Data Engineer, you will: digest…
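As a hedged sketch of the data-profiling step mentioned above (not from the ad itself), a first-pass profile in PySpark might count rows and per-column nulls and pull summary statistics; the dataset path and columns are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("profiling-sketch").getOrCreate()

# Hypothetical ingested dataset.
df = spark.read.parquet("s3://example-bucket/ingested/customers/")

# Row count and per-column null counts: a common first profiling pass.
print("rows:", df.count())
df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
).show()

# Summary statistics (count, mean, stddev, min, max) for each column.
df.describe().show()
```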
…profiling, ingestion, collation and storage of data for critical client projects. How to develop and enhance your knowledge of agile ways of working and working in an open-source stack (PySpark/PySQL). Quality engineering professionals utilise Accenture delivery assets to plan and implement quality initiatives to ensure solution quality throughout delivery. As a Data Engineering Manager, you will…
…on the principles of event-driven microservices architecture. Required skills: Python; AWS - SNS/SQS, Lambda, Step Functions, ECS; Spinnaker; Kubernetes; Kafka; Terraform; ORM frameworks. Nice to have: PySpark and Databricks experience is a plus; knowledge and experience of the JPMorgan ecosystem/tools will carry higher value.
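A minimal sketch, assuming the standard SQS-to-Lambda integration, of the event-driven handling this ad alludes to; the message fields (order_id, status) and the routing logic are hypothetical, illustration only.

```python
import json

def handler(event, context):
    """AWS Lambda handler triggered by an SQS event source mapping."""
    records = event.get("Records", [])
    for record in records:
        # Each SQS record carries the message payload in its "body" field.
        message = json.loads(record["body"])
        # Hypothetical routing on a message field; real logic would go here.
        if message.get("status") == "created":
            print(f"processing new order {message.get('order_id')}")
        else:
            print(f"skipping event {record['messageId']}")
    # Returning normally acknowledges the batch; raising triggers a retry.
    return {"processed": len(records)}
```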
…algorithms, statistics, and probability theory. You demonstrate strong problem-solving skills, a quick learning ability, and enthusiasm for tackling complex challenges. You are proficient in Python, with experience using PySpark and ML libraries such as scikit-learn, TensorFlow, or Keras. You are familiar with big data technologies (e.g., Hadoop, Spark), cloud platforms (AWS, GCP), and can effectively communicate…
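For illustration (not taken from the listing), a minimal scikit-learn workflow of the kind this ad references, with synthetic data standing in for a real feature set:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real feature set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a baseline classifier and evaluate on the held-out split.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```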
Erskine, Renfrewshire, Scotland, United Kingdom Hybrid / WFH Options
DXC Technology
…deploy models using tools like TensorFlow Serving, TorchServe, ONNX, and TensorRT. Build and manage ML pipelines using MLflow, Kubeflow, and Azure ML Pipelines. Work with large-scale data using PySpark and integrate models into production environments. Monitor model performance and retrain as needed to ensure accuracy and efficiency. Collaborate with cross-functional teams to integrate AI solutions into scalable…
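A hedged sketch of MLflow experiment tracking, one of the tools this ad names; the model, dataset, and parameter are placeholders, not the employer's actual pipeline.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Log the parameter, a metric, and the fitted model against one MLflow run.
with mlflow.start_run():
    alpha = 0.5
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    mlflow.log_param("alpha", alpha)
    mlflow.log_metric("r2", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, "model")
```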
…that resonate. You're motivated, adaptable, and love working in empowered, fast-moving environments. It would be great if you had: experience in the energy retail sector; familiarity with PySpark; a background in pricing or commercial modelling. Here's what else you need to know: this role is open exclusively to internal applicants from E.ON UK and E.ON Next. Role…
London, South East England, United Kingdom Hybrid / WFH Options
myGwork - LGBTQ+ Business Community
…context. Proven success in leading analysts, including development, prioritisation, and delivery. Deep comfort with tools like SQL, Tableau, and large-scale data platforms (e.g., Databricks); bonus for Python or PySpark skills. Strong grasp of A/B testing, experimentation design, and statistical rigour. Exceptional communicator - able to distil complex data into clear, actionable narratives for senior audiences. Strategic thinker…
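By way of illustration of the statistical rigour mentioned above (not from the listing), a two-proportion z-test is one common way to read out an A/B experiment; the conversion counts here are hypothetical.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical A/B result: conversions and sample sizes for control vs. treatment.
conversions = [420, 480]
samples = [10000, 10000]

stat, p_value = proportions_ztest(conversions, samples)
print(f"z = {stat:.3f}, p = {p_value:.4f}")
# A small p-value argues against the null of equal conversion rates.
```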
London (City of London), South East England, United Kingdom
Mastek
…platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks … practices. Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and…
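A minimal sketch, assuming a Databricks environment with Delta Lake available, of the clean-enrich-aggregate transformation work this ad describes; table paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("transform-sketch").getOrCreate()

# Hypothetical bronze-layer Delta tables.
events = spark.read.format("delta").load("/mnt/lake/bronze/events")
users = spark.read.format("delta").load("/mnt/lake/bronze/users")

# Clean (dedupe, drop bad timestamps), enrich via join, then aggregate.
cleaned = events.dropDuplicates(["event_id"]).filter(F.col("event_ts").isNotNull())
enriched = cleaned.join(users.select("user_id", "country"), "user_id", "left")
daily = (
    enriched.groupBy(F.to_date("event_ts").alias("event_date"), "country")
    .agg(F.count("*").alias("events"))
)

# Persist the silver-layer aggregate as Delta.
daily.write.format("delta").mode("overwrite").save("/mnt/lake/silver/daily_events")
```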
Contract duration: 6 months (can be extended). Location: London. Must-have skills: primary - SAS Admin, Enterprise Guide, basic SAS coding skills; secondary - Unix scripting, GitLab, YAML, Autosys, PySpark, Snowflake, AWS, Agile practice, SQL. The candidate should have strong experience in SAS administration and expert SAS coding skills, with more than 6 years' experience, and very good…
PySpark + Fabric Developer (Contract) | London | Office-based. Location: London (office-based). Contract: 6 months (potential extension). Start: ASAP. Rate: market rate (Inside IR35). We’re looking for experienced PySpark + Fabric Developers to join a major transformation programme with a leading global financial data and infrastructure organisation. This is an exciting opportunity to work on cutting-edge … enhance throughput. Collaborate with analysts and stakeholders to translate business needs into technical solutions. Maintain clear documentation and contribute to internal best practices. Requirements: strong hands-on experience with PySpark (RDDs, DataFrames, Spark SQL); proven ability to build and optimise ETL pipelines and dataflows; familiarity with Microsoft Fabric or similar lakehouse/data platform environments; experience with Git…
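For illustration only, a small PySpark example mixing the DataFrame API and Spark SQL, the skills this listing calls out; the table and path are hypothetical and Fabric-specific configuration is omitted.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Hypothetical lakehouse table registered as a temporary view.
trades = spark.read.parquet("/lakehouse/raw/trades/")
trades.createOrReplaceTempView("trades")

# The aggregation expressed in Spark SQL over the DataFrame-backed view.
top_symbols = spark.sql("""
    SELECT symbol, SUM(quantity * price) AS notional
    FROM trades
    GROUP BY symbol
    ORDER BY notional DESC
    LIMIT 10
""")

# Cache before repeated downstream use to avoid rescanning the source.
top_symbols.cache()
top_symbols.show()
```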
…for Technical Data Architect. Location: Central London. Type: Permanent. Hybrid role (2-3 days from client location). We are seeking a highly skilled Technical Data Architect with expertise in Databricks, PySpark, and modern data engineering practices. The ideal candidate will lead the design, development, and optimization of scalable data pipelines, while ensuring data accuracy, consistency, and performance across the enterprise … cross-functional teams. Key Responsibilities: lead the design, development, and maintenance of scalable, high-performance data pipelines on Databricks. Architect and implement data ingestion, transformation, and integration workflows using PySpark, SQL, and Delta Lake. Guide the team in migrating legacy ETL processes to modern cloud-based data pipelines. Ensure data accuracy, schema consistency, row counts, and KPIs during migration … cloud platforms, and analytics. Required Skills & Qualifications: 10-12 years of experience in data engineering, with at least 3+ years in a technical lead role. Strong expertise in Databricks, PySpark, and Delta Lake. DBT. Advanced proficiency in SQL, ETL/ELT pipelines, and data modelling. Experience with Azure Data Services (ADLS, ADF, Synapse) or other major cloud platforms…
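A hedged sketch of the migration validation this ad mentions (row counts, schema consistency, KPI parity between legacy and migrated tables); the paths, table names, and the revenue KPI are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("migration-checks").getOrCreate()

# Hypothetical legacy source and migrated Delta target.
legacy = spark.read.parquet("/mnt/legacy/sales")
migrated = spark.read.format("delta").load("/mnt/lake/sales")

# Row-count parity between source and target.
assert legacy.count() == migrated.count(), "row counts diverge"

# Schema consistency: same column names and types, order-insensitive.
legacy_schema = {(f.name, f.dataType.simpleString()) for f in legacy.schema.fields}
migrated_schema = {(f.name, f.dataType.simpleString()) for f in migrated.schema.fields}
assert legacy_schema == migrated_schema, "schemas diverge"

# One illustrative KPI check: total revenue should agree across systems.
legacy_rev = legacy.agg(F.sum("revenue")).first()[0]
migrated_rev = migrated.agg(F.sum("revenue")).first()[0]
assert legacy_rev == migrated_rev, "revenue KPI diverges"
```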
London, South East England, United Kingdom Hybrid / WFH Options
Asset Resourcing Limited
Data QA Engineer – Remote-first – £55-65,000. Overview: As a Data QA Engineer, you will ensure the reliability, accuracy and performance of our client’s data solutions. Operating remotely, you will work closely with Data Engineers, Architects and Analysts…