Data Platform Engineer - ADF, Terraform, Synapse, Spark. £Market Rate (Inside IR35). London/Hybrid. 6 months. We are currently working with a client that urgently requires a Data Platform Engineer with hands-on expertise in Synapse Spark architecture, ADF and Terraform (IaC) to assist in designing and delivering complex cloud solutions. Key Requirements: Proven, hands-on experience with Microsoft Azure as a Data Platform Engineer. Advanced knowledge of Azure Data Factory (ADF), including network configurations and integration runtime options. Strong expertise in Synapse Analytics Spark, Azure Key Vault, and Data Lake Storage accounts. Experience in the efficient handling of large-scale data. Deep understanding of Azure DevOps pipelines and deploying Terraform (Infrastructure as Code). Proven experience in platform migration projects and delivering secure, scalable cloud solutions. Nice to have: Immediate availability. Understanding of Synapse Spark architecture. Familiarity with a variety of technologies, including Apache Spark, Spark ML, serverless Spark pools, T-SQL and ETL. Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and an employment business for the supply of temporary …
Employment Type: Contract
Rate: £600 - £800 per day, market rate (Inside IR35)
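As a rough illustration of the Synapse Spark, Key Vault and Data Lake combination this role asks for, here is a minimal sketch of a Synapse notebook cell; the key vault, secret and storage account names are hypothetical, and the mssparkutils helper is assumed to be available in the Synapse runtime.

```python
# Minimal Synapse Spark sketch: fetch a secret from Azure Key Vault and read
# Parquet from a Data Lake Storage Gen2 account. All names are placeholders.
from pyspark.sql import SparkSession
from notebookutils import mssparkutils  # provided by Synapse Spark runtimes

spark = SparkSession.builder.getOrCreate()

# Hypothetical Key Vault ("my-key-vault") and secret ("storage-account-key").
storage_key = mssparkutils.credentials.getSecret("my-key-vault", "storage-account-key")

# Authorise Spark against the (hypothetical) storage account with that key.
spark.conf.set("fs.azure.account.key.mydatalake.dfs.core.windows.net", storage_key)

# Read raw Parquet from the lake and write a de-duplicated copy back.
df = spark.read.parquet("abfss://raw@mydatalake.dfs.core.windows.net/sales/")
df.dropDuplicates().write.mode("overwrite").parquet(
    "abfss://curated@mydatalake.dfs.core.windows.net/sales/"
)
```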
Azure, or GCP, with hands-on experience in cloud-based data services. Proficiency in SQL and Python for data manipulation and transformation. Experience with modern data engineering tools, including Apache Spark, Kafka, and Airflow. Strong understanding of data modelling, schema design, and data warehousing concepts. Familiarity with data governance, privacy, and compliance frameworks (e.g., GDPR, ISO 27001). Hands …
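The "SQL and Python for data manipulation" requirement typically means mixing Spark SQL with the DataFrame API, as in this small sketch; the paths, table and column names are invented for illustration.

```python
# Mixing SQL and Python in PySpark: table and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

orders = spark.read.parquet("/data/orders")  # hypothetical input path
orders.createOrReplaceTempView("orders")

# SQL for the aggregation, DataFrame API for the follow-up transformation.
daily = spark.sql("""
    SELECT order_date, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
""")
daily.withColumn("revenue", F.round(F.col("revenue"), 2)) \
     .write.mode("overwrite").parquet("/data/daily_revenue")
```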
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
pipelines. Understanding of data modelling, data warehousing concepts, and distributed computing. Familiarity with CI/CD, version control, and DevOps practices. Nice-to-Have: Experience with streaming technologies (e.g., Spark Structured Streaming, Event Hub, Kafka). Knowledge of MLflow, Unity Catalog, or advanced Databricks features. Exposure to Terraform or other IaC tools. Experience working in Agile/Scrum environments. …
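For the streaming nice-to-have, a minimal Spark Structured Streaming job reading from Kafka might look like the following; the broker, topic, checkpoint and output locations are placeholders, and the Delta sink assumes Delta Lake is on the cluster.

```python
# Spark Structured Streaming from Kafka into a Delta table. Names invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

# Kafka values arrive as bytes; cast to string before parsing downstream.
parsed = events.select(F.col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream.format("delta")                 # assumes Delta Lake
    .option("checkpointLocation", "/chk/events")
    .outputMode("append")
    .start("/data/events")
)
query.awaitTermination()
```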
data modelling, data warehousing, and ETL development. Hands-on experience with Azure Data Factory, Azure Data Lake, and Azure SQL Database. Exposure to big data technologies such as Hadoop, Spark, and Databricks. Experience with Azure Synapse Analytics or Cosmos DB. Familiarity with data governance frameworks (e.g., GDPR, HIPAA). Experience implementing CI/CD pipelines using Azure DevOps or …
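One common pattern behind the Azure SQL Database and Data Lake requirements is a JDBC extract landed in the lake; this sketch assumes the Microsoft SQL Server JDBC driver is on the cluster, and the server, database, table and credentials are placeholders.

```python
# Reading from Azure SQL Database into Spark over JDBC; names are invented.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

customers = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydb")
    .option("dbtable", "dbo.customers")
    .option("user", "etl_user")      # in practice, fetch from Key Vault
    .option("password", "***")
    .load()
)

# Land the extract in the lake for downstream Synapse/Databricks consumption.
customers.write.mode("overwrite").parquet("/mnt/datalake/customers")
```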
Edinburgh, Midlothian, Scotland, United Kingdom Hybrid/Remote Options
Lloyds Banking Group
life through automation, CI/CD, and modern cloud engineering practices. Wherever you land, you'll be working with some of the biggest datasets in the UK, using everything from Spark and statistical methods to domain knowledge and emerging GenAI applications. The work you could be doing: Design and deploy machine learning models for fraud detection, credit risk, customer segmentation, and behavioural analytics using scalable frameworks like TensorFlow, PyTorch, and XGBoost. Engineer robust data pipelines and ML workflows using Apache Spark, Vertex AI, and CI/CD tooling to ensure seamless model delivery and monitoring. Apply advanced techniques in deep learning, natural language processing (NLP), and statistical modelling to extract insights and drive decision-making. Explore and evaluate …
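As a toy example of the fraud-detection modelling described above, here is a short XGBoost classification sketch; the dataset, feature names and hyperparameters are invented for illustration, not taken from the role.

```python
# Toy fraud-classification sketch with XGBoost; data and features invented.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

df = pd.read_parquet("transactions.parquet")   # hypothetical dataset
X = df[["amount", "merchant_risk", "hour_of_day"]]
y = df["is_fraud"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="auc")
model.fit(X_train, y_train)

print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```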
Leeds, West Yorkshire, Yorkshire, United Kingdom Hybrid/Remote Options
Lloyds Banking Group
life through automation, CI/CD, and modern cloud engineering practices. Wherever you land, you'll be working with some of the biggest datasets in the UK, using everything from Spark and statistical methods to domain knowledge and emerging GenAI applications. The work you could be doing: Design and deploy machine learning models for fraud detection, credit risk, customer segmentation, and behavioural analytics using scalable frameworks like TensorFlow, PyTorch, and XGBoost. Engineer robust data pipelines and ML workflows using Apache Spark, Vertex AI, and CI/CD tooling to ensure seamless model delivery and monitoring. Apply advanced techniques in deep learning, natural language processing (NLP), and statistical modelling to extract insights and drive decision-making. Explore and evaluate …
Richmond, Surrey, United Kingdom Hybrid/Remote Options
Lexstra Plc
Architect and deliver secure, scalable data platforms across cloud environments (AWS, Azure, or GCP). Lead the design and hands-on development of data pipelines (batch and real-time) using Python, Spark, and modern frameworks. Define and enforce standards for data quality, validation, and regulatory auditing. Collaborate with cross-functional teams and stakeholders to align data initiatives with business and compliance …
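A minimal sketch of the data-quality and auditing standards mentioned above: a batch gate that fails fast and writes an audit record. The thresholds, columns and paths are illustrative only.

```python
# Simple data-quality gate for a batch pipeline; names and limits invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("/data/incoming/trades")   # hypothetical feed

total = df.count()
null_ids = df.filter(F.col("trade_id").isNull()).count()

# Fail fast if the feed is empty or too many keys are missing.
if total == 0 or null_ids / total > 0.01:
    raise ValueError(f"Quality gate failed: rows={total}, null_ids={null_ids}")

# Record the check for regulatory auditing.
spark.createDataFrame(
    [(total, null_ids)], ["row_count", "null_trade_ids"]
).write.mode("append").parquet("/audit/trades_quality")
```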
City of London, London, United Kingdom Hybrid/Remote Options
Syntax Consultancy Limited
data modelling techniques and data integration patterns. Experience of working with complex data pipelines, large data sets, data pipeline optimization and data architecture design. Implementing complex data transformations using Spark, PySpark or Scala, and working with SQL/MySQL databases. Experience with data quality, data governance processes, Git version control and Agile development environments. Azure Data Engineer certification preferred …
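A representative "complex transformation" of the kind this spec describes is a window-function dedup, sketched below; the tables and columns are hypothetical.

```python
# Window-function transformation: keep the latest record per customer.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
df = spark.read.table("staging.customer_updates")   # hypothetical table

w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())

latest = (
    df.withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn")
)
latest.write.mode("overwrite").saveAsTable("curated.customers")
```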
EC4N 6JD, Vintry, United Kingdom Hybrid/Remote Options
Syntax Consultancy Ltd
data modelling techniques and data integration patterns. Experience of working with complex data pipelines, large data sets, data pipeline optimization and data architecture design. Implementing complex data transformations using Spark, PySpark or Scala, and working with SQL/MySQL databases. Experience with data quality, data governance processes, Git version control and Agile development environments. Azure Data Engineer certification preferred …
London, South East, England, United Kingdom Hybrid/Remote Options
Harnham - Data & Analytics Recruitment
platform. DevOps for ML: Build and automate robust CI/CD pipelines using Git to ensure stable, reliable, and frequent model releases. Performance Engineering: Profile and optimise large-scale Spark/Python codebases for production efficiency, focusing on minimising latency and cost. Knowledge Transfer: Act as the technical lead to embed MLOps standards into the core Data Engineering team. Proven experience designing and implementing end-to-end MLOps processes in a production environment. Cloud ML Stack: Expert proficiency with Databricks and MLflow. Big Data/Coding: Expert Apache Spark and Python engineering experience on large datasets. Core Engineering: Strong experience with Git for version control and building CI/CD/release pipelines. Data Fundamentals: Excellent … Familiarity with low-latency data stores (e.g., Cosmos DB). If you have the capability to bring MLOps maturity to a traditional Engineering team using the MLflow/Databricks/Spark stack, please email: with your CV and contract details.
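For flavour, here is a minimal MLflow tracking sketch of the kind this MLOps stack implies; the experiment path and the logistic-regression model are illustrative stand-ins, not the client's actual pipeline.

```python
# MLflow tracking sketch: log parameters, a metric and a model artefact so
# releases are reproducible. Experiment name and model are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1_000, random_state=0)

mlflow.set_experiment("/Shared/fraud-model")   # hypothetical experiment path
with mlflow.start_run():
    model = LogisticRegression(C=0.5, max_iter=200).fit(X, y)
    mlflow.log_param("C", 0.5)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Logging the model makes it promotable through CI/CD stages.
    mlflow.sklearn.log_model(model, "model")
```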
ensure data integrity and reliability. Optimise data workflows for performance, cost efficiency, and maintainability using modern data-engineering tools and platforms (e.g., Azure Data Factory, AWS Data Pipeline, Databricks, Apache Spark). Support the integration of data into visualisation platforms and analytical environments (e.g., Power BI, ServiceNow). Ensure adherence to data governance, security, and privacy policies. Document …
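Two cheap Spark optimisations relevant to the performance and cost-efficiency duty above: cache only what is reused, and partition output so downstream reads prune files. The paths and columns in this sketch are invented.

```python
# Caching for reuse and partitioned writes for pruning; names invented.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
events = spark.read.parquet("/data/events")

# Cache once, reuse for several aggregates, then release the memory.
events.cache()
by_day = events.groupBy("event_date").count()
by_type = events.groupBy("event_type").count()
events.unpersist()

# Partitioned writes let later jobs read only the dates they need.
events.write.mode("overwrite").partitionBy("event_date").parquet(
    "/data/events_partitioned"
)
```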
Years Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Good proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage …
Job Description - Essential Skills & Experience:
· 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks.
· Good proficiency in Python and Spark (PySpark) or Scala.
· Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns.
· Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage …
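A standard expression of the data-warehousing principles listed above is a dimension upsert, sketched here as a Delta Lake MERGE on Databricks; the table names and join key are illustrative, and the snippet assumes the delta-spark package is available.

```python
# Warehouse-style upsert with a Delta Lake MERGE; names are illustrative.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("/landing/customers")    # hypothetical batch
target = DeltaTable.forName(spark, "dims.customer")   # hypothetical dimension

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # refresh existing customers
    .whenNotMatchedInsertAll()   # insert new ones
    .execute()
)
```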
teams to build scalable data pipelines and contribute to digital transformation initiatives across government departments. Key Responsibilities: Design, develop and maintain robust data pipelines using PostgreSQL and Airflow or Apache Spark. Collaborate with frontend/backend developers using Node.js or React. Implement best practices in data modelling, ETL processes and performance optimisation. Contribute to containerised deployments (Docker/Kubernetes) … within Agile teams and support DevOps practices. What We're Looking For: Proven experience as a Data Engineer in complex environments. Strong proficiency in PostgreSQL and either Airflow or Spark. Solid understanding of Node.js or React for integration and tooling. Familiarity with containerisation technologies (Docker/Kubernetes) is a plus. Excellent communication and stakeholder engagement skills. Experience working within …
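The PostgreSQL-plus-Airflow combination above usually looks something like this minimal DAG; the connection id, table, output path and schedule are placeholders, and the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`).

```python
# Minimal Airflow DAG: daily extract from PostgreSQL to Parquet. Names invented.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.postgres.hooks.postgres import PostgresHook


def export_rows():
    # Uses an Airflow connection named "gov_pg" (hypothetical).
    hook = PostgresHook(postgres_conn_id="gov_pg")
    df = hook.get_pandas_df("SELECT * FROM public.cases WHERE updated = CURRENT_DATE")
    df.to_parquet("/data/exports/cases.parquet")


with DAG(
    dag_id="cases_export",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="export_cases", python_callable=export_rows)
```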
London, South East, England, United Kingdom Hybrid/Remote Options
Executive Facilities
domains. Proficiency in SQL for data extraction, transformation, and pipeline development. Experience with dashboarding and visualization tools (Tableau, Qlik, or similar). Familiarity with big data tools (Snowflake, Databricks, Spark) and ETL processes. Useful experience: Python or R for advanced analytics, automation, or experimentation support. Knowledge of statistical methods and experimentation (A/B testing) preferred. Machine learning and …
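For the A/B-testing requirement, the workhorse is a two-proportion z-test, sketched below with made-up conversion counts.

```python
# Two-proportion z-test for a simple A/B experiment; counts are made up.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 505]     # variant A, variant B
visitors = [10_000, 10_050]

stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# A p-value below the chosen alpha (say 0.05) suggests a real difference.
```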
Preferred: Experience in front-office roles or collaboration with trading desks. Familiarity with financial instruments across asset classes (equities, FX, fixed income, derivatives). Experience with distributed computing frameworks (e.g., Spark, Dask) and cloud-native ML pipelines. Exposure to LLMs, graph learning, or other advanced AI methods. Strong publication record or open-source contributions in ML or quantitative finance. Please …
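On the Dask side of the distributed-computing preference, a representative out-of-core workload is sketched below; the paths and columns are hypothetical.

```python
# Dask sketch: out-of-core groupby over a directory of Parquet files.
import dask.dataframe as dd

trades = dd.read_parquet("/data/trades/*.parquet")   # hypothetical files

# Lazy computation; nothing runs until .compute().
pnl_by_desk = trades.groupby("desk")["pnl"].sum()
print(pnl_by_desk.compute())
```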
a large Aerospace company is looking for an experienced Senior Data Engineer to assist with building and managing data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. Key Responsibilities: Design, develop, and maintain secure and scalable data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. Implement data ingestion, transformation, and integration … handover and audit readiness. Required Skillset: Experience working in government, defence, or highly regulated industries with knowledge of relevant standards. Experience with additional data processing and ETL tools like Apache Kafka, Spark, or Hadoop. Familiarity with containerization and orchestration tools such as Docker and Kubernetes. Experience with monitoring and alerting tools such as Prometheus, Grafana, or ELK for …
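For the Elasticsearch ingestion side of this role, a minimal bulk-indexing sketch with the official Python client might look like the following; the cluster URL, index name and documents are placeholders.

```python
# Bulk-indexing into Elasticsearch with the official Python client.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("https://localhost:9200")   # hypothetical cluster

docs = [
    {"_index": "flight-logs", "_source": {"aircraft": "A320", "status": "ok"}},
    {"_index": "flight-logs", "_source": {"aircraft": "B737", "status": "warn"}},
]

# helpers.bulk batches the index requests, which matters at pipeline scale.
helpers.bulk(es, docs)
print(es.count(index="flight-logs"))
```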