Databricks, or equivalent). Proficiency in ELT/ETL development using tools such as Data Factory, Dataflow Gen2, Databricks Workflows, or similar orchestration frameworks. Experience with Python and/or PySpark for data transformation, automation, or pipeline development. Familiarity with cloud services and deployment automation (e.g., Azure, AWS, Terraform, CI/CD, Git). Ability to deliver clear, insightful, and performant More ❯
DevOps best practices. Collaborate with BAs on source-to-target mapping and build new data model components. Participate in Agile ceremonies (stand-ups, backlog refinement, etc.). Essential Skills: PySpark and SparkSQL. Strong knowledge of relational database modelling. Experience designing and implementing in Databricks (DBX notebooks, Delta Lakes). Azure platform experience. ADF or Synapse pipelines for orchestration. Python More ❯
Drive automation and CI/CD practices across the data platform. Explore new technologies to improve data ingestion and self-service. Essential Skills: Azure Databricks: expert in Spark (SQL, PySpark) and Databricks Workflows. Data Pipeline Design: proven experience in scalable ETL/ELT development. Azure Services: Data Lake, Blob Storage, Synapse. Data Governance: Unity Catalog, access control, metadata management. Performance More ❯
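The listings above repeatedly ask for the same core skill: scalable ELT in Spark (SQL, PySpark). A rough sketch of a single extract-transform-load step of that kind; the paths, columns, and cleansing rules are invented for illustration, and on Databricks the write would typically target a Delta table rather than plain parquet:

```python
# Illustrative only: a minimal PySpark ELT step of the kind these roles describe.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("elt-example").getOrCreate()

# Extract: load raw orders from a landing zone (path is made up).
raw = spark.read.option("header", True).csv("/landing/orders.csv")

# Transform: type the columns, drop bad rows, add a load date.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id", "order_ts"])
       .withColumn("load_date", F.current_date())
)

# Load: write to the curated zone; on Databricks this would typically be
# .format("delta") into a Unity Catalog table rather than plain parquet.
clean.write.mode("overwrite").partitionBy("load_date").parquet("/curated/orders")
```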
technical stakeholders. Business acumen with a focus on delivering data-driven value. Strong understanding of risk, controls, and compliance in data management. Technical Skills: Hands-on experience with Python, PySpark, and SQL. Experience with AWS (preferred). Knowledge of data warehousing (DW) concepts and ETL processes. Familiarity with DevOps principles and secure coding practices. Experience: Proven track record More ❯
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
Datatech Analytics
and dimensional data models, and contribute to fostering a culture of innovation and evidence-led decision-making. Experience Required: Essential experience with Azure Databricks - including Unity Catalog, Python (ideally PySpark), and SQL. Practical knowledge of modern ELT workflows, with a focus on the Extract and Load stages. Experience working across both technical and non-technical teams, with the ability to More ❯
we would like to discuss with you. Please note this role requires onsite attendance once a week and has been deemed inside IR35. Requirement: Experience in Azure Synapse, ETL, PySpark, SQL, data modelling and Databricks. Design and develop Azure pipelines including data transformation and data cleansing. Document source-to-target mappings. Re-engineer manual data flows to enable More ❯
business-critical programme. Key Requirements: Proven experience as a Data Engineer within healthcare. Proficiency in Azure Data Factory, Azure Synapse, Snowflake, and SQL. Strong Python skills, including experience with PySpark and metadata-driven frameworks. Familiarity with cloud platforms (Azure preferred), pipelines, and production code. Solid understanding of relational databases and data modelling (3NF & dimensional). Strong communication skills and More ❯
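"Metadata-driven frameworks" in the listing above usually means pipelines whose sources are described by configuration rather than hard-coded. A minimal sketch, assuming a simple in-code config (in practice the metadata would live in a control table or JSON/YAML file); all names, paths, and the de-duplication rule are invented:

```python
# Sketch of a metadata-driven ingestion loop; the config and table names
# are hypothetical examples, not from any specific listing.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata-driven").getOrCreate()

# In practice this metadata usually lives in a control table or config file.
SOURCES = [
    {"name": "patients",   "path": "/landing/patients",   "format": "parquet", "key": "patient_id"},
    {"name": "admissions", "path": "/landing/admissions", "format": "csv",     "key": "admission_id"},
]

for src in SOURCES:
    reader = spark.read.format(src["format"])
    if src["format"] == "csv":
        reader = reader.option("header", True)
    df = reader.load(src["path"])
    # One generic rule applied to every source: de-duplicate on the business key.
    df = df.dropDuplicates([src["key"]])
    df.write.mode("overwrite").saveAsTable(f"staging.{src['name']}")
```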
and manage environment promotion. Maintain configuration, documentation, and monitoring. Mentor and support Data Engineers. Required Skills: Strong experience with Databricks, Azure Synapse, and Azure DevOps. Proficient in SQL and PySpark. Proven leadership in small engineering teams. Skilled in configuration management and documentation. Location: Warwickshire. Type: Contract - 6 months initially (high chance of extension). IR35 Status: Inside IR35. Rate: Open More ❯
Warwick, Warwickshire, United Kingdom Hybrid / WFH Options
Pontoon
of Data Engineers. Essential Skills: Extensive hands-on experience with Databricks - this is the core of the role. Strong background in Synapse and Azure DevOps. Proficiency in SQL and PySpark within a Databricks environment. Proven experience leading small engineering teams. Skilled in configuration management and technical documentation. If you're a Databricks expert looking for a role that blends More ❯
setting up CI/CD pipelines (Azure DevOps, GitHub Actions, or GitLab CI targeting Azure). Good understanding of data in software applications, including experience with: data libraries (Pandas, NumPy, PySpark); data-driven solutions with MongoDB; building features that rely on performance monitoring, analytics, or metrics. Start Date: ASAP. Length: Initial 6 months. Location: UK Remote. IR35 Status: Outside More ❯
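For the "data libraries" item above, a self-contained flavour of the Pandas/NumPy work implied by "performance monitoring, analytics, or metrics"; the latency data is fabricated:

```python
# Small example of a metrics rollup with Pandas and NumPy; the sample
# latency figures are made up for illustration.
import numpy as np
import pandas as pd

# Hypothetical per-request latency samples (milliseconds).
latencies = pd.DataFrame({
    "endpoint": ["/search", "/search", "/checkout", "/checkout"],
    "ms": [120.0, 95.0, 310.0, 280.0],
})

# A typical performance-monitoring rollup: mean and p95 per endpoint.
summary = latencies.groupby("endpoint")["ms"].agg(
    mean="mean",
    p95=lambda s: np.percentile(s, 95),
)
print(summary)
```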
London, South East, England, United Kingdom Hybrid / WFH Options
Sanderson
AWS/Azure - moving towards Azure). Collaborate with stakeholders and technical teams to deliver solutions that support business growth. Skills & Experience Required: Strong hands-on experience in Python, PySpark, SQL, Jupyter. Experience in Machine Learning engineering or data-focused development. Exposure to working in cloud platforms (AWS/Azure). Ability to collaborate effectively with senior engineers More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Sanderson
Senior Engineering Level. Mentoring/team leading experience - nice to have (full support from Engineering Manager). Hands-on development/engineering background. Machine Learning or data background. Technical Experience: PySpark, Python, SQL, Jupyter. Cloud: AWS, Azure - moving towards Azure. Nice to Have: Astro/Airflow, Notebook. Reasonable Adjustments: Respect and equality are core values to us. We More ❯
data architecture, data modelling, and big data platforms. Proven expertise in Lakehouse Architecture, particularly with Databricks. Hands-on experience with tools such as Azure Data Factory, Unity Catalog, Synapse, PySpark, Power BI, SQL Server, Cosmos DB, and Python. In-depth knowledge of data governance frameworks and best practices. Solid understanding of cloud-native architectures and microservices in data environments. More ❯
individuals across 100 countries and has a reach of 600 million users, is recruiting an MLOps Engineer with chatbot (voice) integration project experience using Python, PyTorch, PySpark and AWS LLM/Generative AI. Our client is paying £400 per day outside IR35 to start ASAP for an initial 6-month contract on a hybrid basis near Stratford More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Sanderson
current solution - identify where possible, and if possible, whether the solution works for Argos & Nectar products. Experience Required: MTA (Multi-Touch Attribution) and MMM (Marketing Mix Modelling) experience. Python, SQL, PySpark. Cloud technology - ideally AWS or Azure. Machine Learning experience. Accumetric Metrics experience. Experience of working with multi-functional teams. Sanderson is committed to barrier-free and inclusive recruitment. We are More ❯
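The MMM (Marketing Mix Modelling) requirement above, at its simplest, means regressing a sales outcome on per-channel marketing spend to estimate each channel's incremental effect. A toy sketch with fabricated numbers; real MMM work adds adstock and saturation transforms and far more rigour:

```python
# Toy Marketing Mix Modelling illustration; all data is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
weeks = 104
spend = rng.uniform(0, 100, size=(weeks, 3))   # hypothetical TV, search, social spend
true_effect = np.array([0.8, 1.5, 0.4])        # planted per-channel effects
sales = 200 + spend @ true_effect + rng.normal(0, 10, weeks)

model = LinearRegression().fit(spend, sales)
# Coefficients approximate each channel's incremental sales per unit of spend.
print(dict(zip(["tv", "search", "social"], model.coef_.round(2))))
```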
Bracknell, Berkshire, United Kingdom Hybrid / WFH Options
Teksystems
in building scalable data solutions that empower market readiness. 3 months initial contract. Remote working (UK based). Inside IR35. Responsibilities: Design, develop, and maintain data pipelines using Palantir Foundry, PySpark, and TypeScript. Collaborate with cross-functional teams to integrate data sources and ensure data quality and consistency. Implement robust integration and unit testing strategies to validate data workflows. Engage More ❯
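The listing above pairs pipeline development with "robust integration and unit testing strategies". A minimal sketch of how a PySpark transform can be unit-tested against a local session; Foundry's own test harness is not shown here, and the transform and its rule are invented for illustration:

```python
# Unit-testing a pipeline transform with a plain local SparkSession.
# The transform and its rule are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

def drop_negative_amounts(df):
    """The transform under test: keep only rows with a non-negative amount."""
    return df.filter(F.col("amount") >= 0)

def test_drop_negative_amounts():
    spark = SparkSession.builder.master("local[1]").appName("test").getOrCreate()
    df = spark.createDataFrame(
        [("a", 10.0), ("b", -5.0), ("c", 0.0)], ["id", "amount"]
    )
    result = drop_negative_amounts(df)
    assert result.count() == 2
    assert result.filter(F.col("amount") < 0).count() == 0

if __name__ == "__main__":
    test_drop_negative_amounts()
    print("ok")
```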
Spark - must have. Scala - must have, hands-on coding. Hive & SQL - must have. Note: please screen the profile before interview - the candidate must know the Scala coding language (a PySpark-only profile will not help here), and the interview includes a coding test. Job Description: Scala/Spark. Good Big Data resource with the below skillset: Spark, Scala, Hive/HDFS/HQL, Linux More ❯
Bracknell, Berkshire, United Kingdom Hybrid / WFH Options
Teksystems
Provide hands-on support for production environments, ensuring the stability and performance of data workflows. Troubleshoot and resolve issues related to data pipelines and integrations built using Palantir Foundry, PySpark, and TypeScript. Collaborate with engineering and business teams to understand requirements and deliver timely solutions. Support and improve continuous integration (CI) processes to streamline deployment and reduce downtime. Communicate More ❯
Central London, London, United Kingdom Hybrid / WFH Options
iDPP
someone who enjoys building scalable data solutions while staying close to business impact. The Role: As a Data Analytics Engineer, you'll design, build, and maintain reliable data pipelines - primarily using PySpark, SQL, and Python - to ensure business teams (analysts, product managers, finance, operations) have access to well-modelled, actionable data. You'll work closely with stakeholders to translate business needs into … spend more time coding, managing data infrastructure, and ensuring pipeline reliability. Who We're Looking For: Data Analytics: analysts who have strong experience building and maintaining data pipelines (particularly in PySpark/SQL) and want to work on production-grade infrastructure. Data Engineering: engineers who want to work more closely with business stakeholders and enable analytics-ready data solutions. Analytics …: professionals who already operate in this hybrid space, with proven expertise across big data environments, data modelling, and business-facing delivery. Key Skills & Experience: Strong hands-on experience with PySpark, SQL, and Python. Proven track record of building and maintaining data pipelines. Ability to translate business requirements into robust data models and solutions. Experience with data validation, quality checks More ❯
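The listing above closes on "data validation, quality checks". One common shape for such checks in PySpark; the table, columns, and rules below are all hypothetical:

```python
# A simple pipeline quality gate in PySpark; path, columns, and thresholds
# are invented for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("quality-checks").getOrCreate()
df = spark.read.parquet("/curated/orders")  # made-up path

checks = {
    "no_null_keys": df.filter(F.col("order_id").isNull()).count() == 0,
    "no_duplicates": df.count() == df.dropDuplicates(["order_id"]).count(),
    "amounts_non_negative": df.filter(F.col("amount") < 0).count() == 0,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # Failing loudly keeps bad data from flowing downstream to analysts.
    raise ValueError(f"Data quality checks failed: {failed}")
```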
with a focus on performance, scalability, and reliability. Responsibilities: Design and implement robust data migration pipelines using Azure Data Factory, Synapse Analytics, and Databricks. Develop scalable ETL processes using PySpark and Python. Collaborate with stakeholders to understand legacy data structures and ensure accurate mapping and transformation. Ensure data quality, governance, and performance throughout the migration lifecycle. Document technical processes … and support knowledge transfer to internal teams. Required Skills: Strong hands-on experience with Azure Data Factory, Synapse, Databricks, PySpark, Python, and SQL. Proven track record in delivering data migration projects within Azure environments. Ability to work independently and communicate effectively with technical and non-technical stakeholders. Previous experience in consultancy or client-facing roles is advantageous More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
Do: Build & optimise recommendation/personalisation models. Drive incremental targeting beyond repeat-purchase patterns. Apply predictive analytics to customer behaviour & purchase history. Use Python (essential), SQL, and ideally PySpark to deliver insights. Collaborate with Product, Content, and Data Science to align models with business goals. Translate data into clear, actionable insights. (Bonus) Explore AI-driven ad content opportunities. … What We're Looking For: Proven experience with predictive modelling/recommender systems. Strong Python & SQL skills (essential). Exposure to PySpark (desirable). Strong communicator with ability to link data to business outcomes. (Bonus) Experience with Generative AI or content automation. More ❯
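For the "recommendation/personalisation models" this role describes, collaborative filtering with Spark's built-in ALS is one standard starting point. A compact sketch, assuming implicit feedback (e.g., purchase counts) and using fabricated interaction data:

```python
# Collaborative filtering with Spark's ALS; the (user, item, count) triples
# are fabricated, and implicit feedback is assumed.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("recs").getOrCreate()

# Hypothetical interaction strengths, e.g. purchase counts from order history.
interactions = spark.createDataFrame(
    [(1, 10, 3.0), (1, 11, 1.0), (2, 10, 2.0), (2, 12, 5.0), (3, 11, 4.0)],
    ["user_id", "item_id", "count"],
)

als = ALS(
    userCol="user_id", itemCol="item_id", ratingCol="count",
    implicitPrefs=True,   # treat counts as implicit feedback, not ratings
    rank=8, coldStartStrategy="drop",
)
model = als.fit(interactions)
model.recommendForAllUsers(3).show(truncate=False)
```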