data services – Databricks, ADF, ADLS, Power BI. Proficiency in SQL and data profiling for test design and validation. Hands-on experience with test automation frameworks such as Python/PySpark, Great Expectations, Pytest, or dbt tests. Practical understanding of CI/CD integration (Azure DevOps, GitHub Actions, or similar). Strong problem-solving skills and the ability to work …
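Data-quality testing of the kind listed above (Great Expectations, Pytest, dbt tests) can be sketched with plain pytest-style checks over a pandas frame; this is a minimal illustration, with pandas standing in for a Spark DataFrame, and all column names are hypothetical:

```python
import pandas as pd

def profile_and_validate(df: pd.DataFrame) -> dict:
    """Minimal data-profiling checks of the kind a Great Expectations
    suite or dbt test would run: nulls, uniqueness, value ranges."""
    report = {
        "row_count": len(df),
        "null_customer_ids": int(df["customer_id"].isna().sum()),
        "duplicate_customer_ids": int(df["customer_id"].duplicated().sum()),
        "negative_amounts": int((df["amount"] < 0).sum()),
    }
    # The suite passes only if every individual check is clean.
    report["passed"] = (
        report["null_customer_ids"] == 0
        and report["duplicate_customer_ids"] == 0
        and report["negative_amounts"] == 0
    )
    return report

# Toy input with one null, one duplicate, and one negative amount.
orders = pd.DataFrame(
    {"customer_id": [1, 2, 2, None], "amount": [10.0, 25.5, -3.0, 8.0]}
)
result = profile_and_validate(orders)
```

In a CI/CD pipeline (Azure DevOps, GitHub Actions), a wrapper test would simply `assert result["passed"]` so that bad data fails the build.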
ad-hoc and complex queries to perform data analysis. Databricks experience is essential. Experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, PySpark, etc. You will be able to develop solutions in a hybrid data environment (on-prem and cloud). You must be able to collaborate seamlessly across diverse technical stacks …
DevOps best practices. Collaborate with BAs on source-to-target mapping and build new data model components. Participate in Agile ceremonies (stand-ups, backlog refinement, etc.). Essential Skills: PySpark and SparkSQL. Strong knowledge of relational database modelling. Experience designing and implementing in Databricks (DBX notebooks, Delta Lakes). Azure platform experience. ADF or Synapse pipelines for orchestration. Python …
Key Skills: Strong SQL skills and experience with relational databases. Hands-on experience with Azure (ADF, Synapse, Data Lake) or AWS/GCP equivalents. Familiarity with scripting languages (Python, PySpark). Knowledge of data modelling and warehouse design (Kimball, Data Vault). Exposure to Power BI to support optimised data models for reporting. Agile team experience, CI/CD …
Learning, Deep Learning or LLM Frameworks). Desirable: Minimum 2 years' experience in a data-related field. Minimum 2 years in a business or management consulting field. Experience of Docker, Hadoop, PySpark, Apache or MS Azure. Minimum 2 years NHS/healthcare experience. Disclosure and Barring Service Check: this post is subject to the Rehabilitation of Offenders Act (Exceptions Order …
is optimized. YOUR BACKGROUND AND EXPERIENCE: 5 years of commercial experience working as a Data Engineer. 3 years' exposure to the Azure stack – Databricks, Synapse, ADF. Python and PySpark. Airflow for orchestration. Test-Driven Development and automated testing. ETL development.
Knutsford, Cheshire, United Kingdom Hybrid / WFH Options
Experis
front-end development (HTML, Streamlit, Flask). Familiarity with model deployment and monitoring in cloud environments (AWS). Understanding of the machine learning lifecycle and data pipelines. Proficiency with Python, PySpark, big-data ecosystems. Hands-on experience with MLOps tools (e.g., MLflow, Airflow, Docker, Kubernetes). Secondary Skills: Experience with RESTful APIs and integrating backend services. All profiles will be reviewed …
London, South East, England, United Kingdom Hybrid / WFH Options
Salt Search
EMEA to drive productivity and efficiency. Own sales operations functions including pipeline management, incentive compensation, deal desk, lead management, and contact centre operations. Use SQL and Python (Pandas, PySpark) to analyse data, automate workflows, and generate insights. Design and manage ETL/ELT processes, data models, and reporting automation. Leverage Databricks, Snowflake, and GCP to enable scalable …
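The Pandas side of the workflow described above (analysing pipeline data and automating a recurring report) might look like this minimal sketch; the deal-stage data and column names are invented for illustration:

```python
import pandas as pd

# Hypothetical CRM export: one row per deal.
deals = pd.DataFrame({
    "region": ["EMEA", "EMEA", "AMER", "AMER", "APAC"],
    "stage":  ["qualified", "proposal", "qualified", "closed_won", "proposal"],
    "value":  [12_000, 30_000, 8_000, 22_000, 15_000],
})

# A summary an ops analyst might automate: open pipeline value by region,
# largest first, excluding deals that have already closed.
open_deals = deals[deals["stage"] != "closed_won"]
pipeline = (
    open_deals.groupby("region", as_index=False)["value"]
    .sum()
    .sort_values("value", ascending=False, ignore_index=True)
)
```

In practice the same aggregation would be scheduled (e.g. via Databricks) and its output pushed into a reporting layer rather than printed ad hoc.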
business-critical programme. Key Requirements: Proven experience as a Data Engineer within healthcare. Proficiency in Azure Data Factory, Azure Synapse, Snowflake, and SQL. Strong Python skills, including experience with PySpark and metadata-driven frameworks. Familiarity with cloud platforms (Azure preferred), pipelines, and production code. Solid understanding of relational databases and data modelling (3NF & dimensional). Strong communication skills and …
London, South East, England, United Kingdom Hybrid / WFH Options
Sanderson
AWS/Azure - moving towards Azure). Collaborate with stakeholders and technical teams to deliver solutions that support business growth. Skills & Experience Required: Strong hands-on experience in Python, PySpark, SQL, Jupyter. Experience in Machine Learning engineering or data-focused development. Exposure to working in cloud platforms (AWS/Azure). Ability to collaborate effectively with senior engineers …
Knutsford, Cheshire East, Cheshire, United Kingdom
Synapri
the delivery of one of the organisation's key strategic initiatives. Experience required: AWS Data/ML Engineering & MLOps (ECS, SageMaker). CI/CD pipelines (GitLab, Jenkins). Python, PySpark & big-data ecosystems. AI/ML lifecycle, deployment & monitoring. MLOps tooling (MLflow, Airflow, Docker, Kubernetes). Front-end exposure (HTML, Flask, Streamlit). RESTful APIs & backend integration. If this ML Engineer …
London, South East, England, United Kingdom Hybrid / WFH Options
Sanderson
Team Leading experience - REQUIRED/Demonstrable on CV (full support from Engineering Manager is also available). Hands-on development/engineering background. Machine Learning or data background. Technical Experience: PySpark, Python, SQL, Jupyter. Cloud: AWS, Azure - moving towards Azure. Nice to Have: Astro/Airflow, Notebooks. Reasonable Adjustments: Respect and equality are core values to us. We …
contract assignment. In order to be successful, you will have the following experience: Extensive AI & data development background. Experience with Python (including data libraries such as Pandas, NumPy, and PySpark) and Apache Spark (PySpark preferred). Strong experience with data management and processing pipelines. Algorithm development and knowledge of graphs will be beneficial. SC Clearance is essential. Within this …
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
with a focus on performance, scalability, and reliability. Responsibilities: Design and implement robust data migration pipelines using Azure Data Factory, Synapse Analytics, and Databricks. Develop scalable ETL processes using PySpark and Python. Collaborate with stakeholders to understand legacy data structures and ensure accurate mapping and transformation. Ensure data quality, governance, and performance throughout the migration lifecycle. Document technical processes and support knowledge transfer to internal teams. Required Skills: Strong hands-on experience with Azure Data Factory, Synapse, Databricks, PySpark, Python, and SQL. Proven track record in delivering data migration projects within Azure environments. Ability to work independently and communicate effectively with technical and non-technical stakeholders. Previous experience in consultancy or client-facing roles is advantageous.
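The mapping-and-transformation step described above can be sketched in plain Python as a metadata-driven transform; in a real migration the same mapping table would drive a chain of PySpark `withColumnRenamed` and `cast` calls. All legacy and target field names here are hypothetical:

```python
# Source-to-target mapping: legacy column -> (target column, caster).
# In practice this table would be maintained with the BAs, not hard-coded.
MAPPING = {
    "CUST_NO":   ("customer_id", int),
    "CUST_NAME": ("customer_name", str),
    "BAL_AMT":   ("balance", float),
}

def migrate_row(legacy_row: dict) -> dict:
    """Apply the mapping to one legacy record, casting each value
    and silently dropping fields with no target mapping."""
    return {
        target: cast(legacy_row[source])
        for source, (target, cast) in MAPPING.items()
        if source in legacy_row
    }

# A legacy record with string-typed values and one obsolete field.
legacy = {
    "CUST_NO": "1042",
    "CUST_NAME": "Acme Ltd",
    "BAL_AMT": "250.75",
    "OBSOLETE_FLAG": "Y",
}
migrated = migrate_row(legacy)
```

Keeping the mapping as data rather than code is what makes the pipeline "metadata-driven": adding a column to the migration is a mapping-table change, not a code change.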