Drive automation and CI/CD practices across the data platform. Explore new technologies to improve data ingestion and self-service.
Essential Skills:
Azure Databricks: Expert in Spark (SQL, PySpark), Databricks Workflows
Data Pipeline Design: Proven experience in scalable ETL/ELT development
Azure Services: Data Lake, Blob Storage, Synapse
Data Governance: Unity Catalog, access control, metadata management
Performance …
Key Skills: Strong SQL skills and experience with relational databases. Hands-on experience with Azure (ADF, Synapse, Data Lake) or AWS/GCP equivalents. Familiarity with scripting languages (Python, PySpark). Knowledge of data modelling and warehouse design (Kimball, Data Vault). Exposure to Power BI to support optimised data models for reporting. Agile team experience, CI/CD …
is optimized.
YOUR BACKGROUND AND EXPERIENCE
5 years of commercial experience working as a Data Engineer
3 years' exposure to the Azure stack - Databricks, Synapse, ADF
Python and PySpark
Airflow for Orchestration
Test-Driven Development and Automated Testing
ETL Development …
Knutsford, Cheshire, United Kingdom Hybrid / WFH Options
Experis
front-end development (HTML, Streamlit, Flask).
Familiarity with model deployment and monitoring in cloud environments (AWS).
Understanding of the machine learning lifecycle and data pipelines.
Proficiency with Python, PySpark, and big data ecosystems.
Hands-on experience with MLOps tools (e.g., MLflow, Airflow, Docker, Kubernetes).
Secondary Skills:
Experience with RESTful APIs and integrating backend services.
All profiles will be reviewed …
London, South East, England, United Kingdom Hybrid / WFH Options
Salt Search
EMEA to drive productivity and efficiency. Own sales operations functions including pipeline management, incentive compensation, deal desk, lead management, and contact centre operations. Use SQL and Python (Pandas, PySpark) to analyse data, automate workflows, and generate insights. Design and manage ETL/ELT processes, data models, and reporting automation. Leverage Databricks, Snowflake, and GCP to enable scalable …
business-critical programme.
Key Requirements:
Proven experience as a Data Engineer within healthcare.
Proficiency in Azure Data Factory, Azure Synapse, Snowflake, and SQL.
Strong Python skills, including experience with PySpark and metadata-driven frameworks.
Familiarity with cloud platforms (Azure preferred), pipelines, and production code.
Solid understanding of relational databases and data modelling (3NF & dimensional).
Strong communication skills and …
Warwick, Warwickshire, United Kingdom Hybrid / WFH Options
Pontoon
of Data Engineers.
Essential Skills:
Extensive hands-on experience with Databricks - this is the core of the role.
Strong background in Synapse and Azure DevOps.
Proficiency in SQL and PySpark within a Databricks environment.
Proven experience leading small engineering teams.
Skilled in configuration management and technical documentation.
If you're a Databricks expert looking for a role that blends …
London, South East, England, United Kingdom Hybrid / WFH Options
Sanderson
AWS/Azure - moving towards Azure). Collaborate with stakeholders and technical teams to deliver solutions that support business growth.
Skills & Experience Required:
Strong hands-on experience in Python, PySpark, SQL, and Jupyter.
Experience in Machine Learning engineering or data-focused development.
Exposure to working in cloud platforms (AWS/Azure).
Ability to collaborate effectively with senior engineers …
London, South East, England, United Kingdom Hybrid / WFH Options
Sanderson
Senior Engineering Level
Mentoring/team-leading experience - nice to have (full support from Engineering Manager)
Hands-on development/engineering background
Machine Learning or data background
Technical Experience: PySpark, Python, SQL, Jupyter
Cloud: AWS, Azure (cloud environment) - moving towards Azure
Nice to Have: Astro/Airflow, Notebook
Reasonable Adjustments: Respect and equality are core values to us. We …
WE NEED THE PYTHON/DATA ENGINEER TO HAVE:
Current DV Security Clearance (Standard or Enhanced)
Experience with big data tools such as Hadoop, Cloudera or Elasticsearch
Python/PySpark experience
Experience with Palantir Foundry is nice to have
Experience working in an Agile Scrum environment with tools such as Confluence/Jira
Experience in design, development, test and …
individuals across 100 countries and has a reach of 600 million users, is recruiting an MLOps Engineer who has chatbot (voice) integration project experience using Python, PyTorch, PySpark, and AWS LLM/Generative AI. Our client is paying £400 per day, Outside IR35, to start ASAP for an initial 6-month contract on a hybrid basis based near Stratford …
Bracknell, Berkshire, United Kingdom Hybrid / WFH Options
Teksystems
in building scalable data solutions that empower market readiness.
3-month initial contract. Remote working (UK-based). Inside IR35.
Responsibilities:
Design, develop, and maintain data pipelines using Palantir Foundry, PySpark, and TypeScript.
Collaborate with cross-functional teams to integrate data sources and ensure data quality and consistency.
Implement robust integration and unit testing strategies to validate data workflows.
Engage …
Bracknell, Berkshire, United Kingdom Hybrid / WFH Options
Teksystems
Provide hands-on support for production environments, ensuring the stability and performance of data workflows. Troubleshoot and resolve issues related to data pipelines and integrations built using Palantir Foundry, PySpark, and TypeScript. Collaborate with engineering and business teams to understand requirements and deliver timely solutions. Support and improve continuous integration (CI) processes to streamline deployment and reduce downtime. Communicate …
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
with a focus on performance, scalability, and reliability.
Responsibilities:
Design and implement robust data migration pipelines using Azure Data Factory, Synapse Analytics, and Databricks.
Develop scalable ETL processes using PySpark and Python.
Collaborate with stakeholders to understand legacy data structures and ensure accurate mapping and transformation.
Ensure data quality, governance, and performance throughout the migration lifecycle.
Document technical processes and support knowledge transfer to internal teams.
Required Skills:
Strong hands-on experience with Azure Data Factory, Synapse, Databricks, PySpark, Python, and SQL.
Proven track record in delivering data migration projects within Azure environments.
Ability to work independently and communicate effectively with technical and non-technical stakeholders.
Previous experience in consultancy or client-facing roles is advantageous.