London, South East, England, United Kingdom Hybrid / WFH Options
FDM Group
of the functions, leveraging cutting-edge technologies to deliver meaningful insights supporting Artificial Intelligence (AI) and Machine Learning (ML) driven solutions.
Responsibilities:
- Design, build, and optimize data pipelines using Apache Spark and Snowflake
- Collaborate with data scientists and analysts to support AI/ML model development and deployment
- Work closely with stakeholders to understand business requirements and translate … analytical solutions
- Manage data warehouses, ensuring data organisation and optimisation
- Monitor data systems for failures, enhancing database performance
Requirements:
- Minimum of 5 years' experience as a Data Engineer with Apache Spark and Snowflake in a production environment
- Strong understanding of AI/ML concepts, with demonstrable experience in supporting or implementing ML models
- Proficiency in Python or Scala …
further details or to enquire about other roles, please contact Nick Mandella at Harnham. KEYWORDS: Python, SQL, AWS, GCP, Azure, Cloud, Databricks, Docker, Kubernetes, CI/CD, Terraform, PySpark, Spark, Kafka, machine learning, statistics, Data Science, Data Scientist, Big Data, Artificial Intelligence, private equity, finance.
Reading, England, United Kingdom Hybrid / WFH Options
HD TECH Recruitment
e.g., Azure Data Factory, Synapse, Databricks, Fabric)
- Data warehousing and lakehouse design
- ETL/ELT pipelines
- SQL and Python for data manipulation and machine learning
- Big Data frameworks (e.g., Hadoop, Spark)
- Data visualisation (e.g., Power BI)
- Understanding of statistical analysis and predictive modelling
Experience:
- 5+ years working with Microsoft data platforms
- 5+ years in a customer-facing consulting or professional …
Reigate, Surrey, England, United Kingdom Hybrid / WFH Options
esure Group
exposure to cloud-native data infrastructures (Databricks, Snowflake), especially in AWS environments, is a plus. Experience in building and maintaining batch and streaming data pipelines using Kafka, Airflow, or Spark. Familiarity with governance frameworks, access controls (RBAC), and implementation of pseudonymisation and retention policies. Exposure to enabling GenAI and ML workloads by preparing model-ready and vector-optimised datasets …
London (City of London), South East England, United Kingdom
Capgemini
and Data Practice. You will have the following experience: 8+ years of experience in data engineering or cloud development. Strong hands-on experience with AWS services. Proficiency in Databricks, Apache Spark, SQL, and Python. Experience with data modeling, data warehousing, and DevOps practices. Familiarity with Delta Lake, Unity Catalog, and Databricks REST APIs. Excellent problem-solving and communication …
Oxford, England, United Kingdom Hybrid / WFH Options
Akrivia Health
cloud technologies and modern engineering practices.
● Experience with the following technologies:
  o Cloud Provider: AWS
  o Languages: Python, PHP, Rust & SQL
  o Hosting: Kubernetes
  o Tooling & Analytics: Airflow, RabbitMQ, Apache Spark, Power BI
● Proven ability to complete projects according to outlined scope, budget, and timeline
● Experience with industry-standard tools such as Microsoft products, Jira, Confluence, project management tools …
observability. Preferred Qualifications: Exposure to machine learning workflows, model lifecycle management, or data engineering platforms. Experience with distributed systems, event-driven architectures (e.g., Kafka), and big data platforms (e.g., Spark, Databricks). Familiarity with banking or financial domain use cases, including data governance and compliance-focused development. Knowledge of platform security, monitoring, and resilient architecture patterns.
London, South East England, United Kingdom Hybrid / WFH Options
Hlx Technology
data infrastructure or data platforms, with proven ability to solve complex distributed systems challenges independently. Expertise in large-scale data processing pipelines (batch and streaming) using technologies such as Spark, Kafka, Flink, or Beam. Experience designing and implementing large-scale data storage systems (feature stores, time-series databases, warehouses, or object stores). Strong distributed systems and infrastructure skills (Kubernetes, Terraform, …)
Reigate, Surrey, England, United Kingdom Hybrid / WFH Options
esure Group
version control, e.g., Git. Knowledge of OO programming, software design (i.e., SOLID principles), and testing practices. Knowledge and working experience of Agile methodologies. Proficient with SQL. Familiarity with Databricks, Spark, and geospatial data/modelling. Exposure to MLOps, model monitoring principles, CI/CD and associated tech (e.g., Docker, MLflow, k8s, FastAPI) is a plus.
data quality, or other areas directly relevant to data engineering responsibilities and tasks. Proven project experience developing and maintaining data warehouses in big data solutions (Snowflake). Expert knowledge of Apache technologies such as Kafka, Airflow, and Spark to build scalable and efficient data pipelines. Ability to design, build, and deploy data solutions that capture, explore, transform, and utilize …
on technically/40% hands-off leadership and strategy. Proven experience designing scalable data architectures and pipelines. Strong Python and SQL, with experience in tools such as Airflow, dbt, and Spark. Cloud expertise (AWS preferred), with Docker/Terraform. A track record of delivering in fast-paced, scale-up environments. Nice to have: experience with streaming pipelines, MLOps, or modern …
Employment Type: Full-Time
Salary: £110,000 - £120,000 per annum, Inc benefits
Pydantic) for document processing, summarization, and clinical Q&A systems. Develop and optimize predictive models using scikit-learn, PyTorch, TensorFlow, and XGBoost. Design robust data pipelines using tools like Spark and Kafka for real-time and batch processing. Manage the ML lifecycle with tools such as Databricks, MLflow, and cloud-native platforms (Azure preferred). Collaborate with engineering teams to …
for new and existing diseases, and a pattern of continuous learning and development is mandatory. Key Responsibilities: Build data pipelines using modern data engineering tools on Google Cloud: Python, Spark, SQL, BigQuery, Cloud Storage. Ensure data pipelines meet the specific scientific needs of data-consuming applications. Responsible for high-quality software implementations according to best practices, including automated test …
London, South East England, United Kingdom Hybrid / WFH Options
Hunter Bond
in Python for data pipelines, transformation, and orchestration. Deep understanding of the Azure ecosystem (e.g., Data Factory, Blob Storage, Synapse, etc.). Proficiency in Databricks (or strong equivalent experience with Apache Spark). Proven ability to work within enterprise-level environments with a focus on clean, scalable, and secure data solutions. If you are the right fit, contact me …
using RDBMS, NoSQL and Big Data technologies.
Data visualization – tools like Tableau
Big data – Hadoop ecosystem; distributions like Cloudera/Hortonworks; Pig and Hive
Data processing frameworks – Spark & Spark Streaming