London, South East, England, United Kingdom Hybrid/Remote Options
Harnham - Data & Analytics Recruitment
platform. DevOps for ML: Build and automate robust CI/CD pipelines using Git to ensure stable, reliable, and frequent model releases. Performance Engineering: Profile and optimise large-scale Spark/Python codebases for production efficiency, focusing on minimising latency and cost. Knowledge Transfer: Act as the technical lead to embed MLOps standards into the core Data Engineering team. … Proven experience designing and implementing end-to-end MLOps processes in a production environment. Cloud ML Stack: Expert proficiency with Databricks and MLflow. Big Data/Coding: Expert Apache Spark and Python engineering experience on large datasets. Core Engineering: Strong experience with Git for version control and building CI/CD/release pipelines. Data Fundamentals: Excellent … Familiarity with low-latency data stores (e.g., Azure Cosmos DB). If you have the capability to bring MLOps maturity to a traditional Engineering team using the MLflow/Databricks/Spark stack, please email: with your CV and contract details.
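By way of illustration only, here is a minimal sketch of the experiment-tracking and model-registry workflow this stack implies; the dataset, metric, and the "demo-classifier" registry name are hypothetical stand-ins, not anything from the posting:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy data standing in for a real feature set (hypothetical).
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    # Log parameters and metrics so each release candidate is reproducible.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)

    # Register the model; a Git-driven CI/CD job could then promote it
    # between stages. The registry name is hypothetical.
    mlflow.sklearn.log_model(model, "model",
                             registered_model_name="demo-classifier")
```

On Databricks the tracking server and registry are built in, so the same calls work unchanged against a cluster-attached notebook or job.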
of SQL and Python You have strong hands-on experience of building scalable data pipelines in cloud-based environments using tools such as DBT, AWS Glue, AWS Lake Formation, Apache Spark and Amazon Redshift You have a good knowledge of data modelling, ELT design patterns, data governance and security best practices You're collaborative and pragmatic with great
Newcastle Upon Tyne, Tyne and Wear, England, United Kingdom Hybrid/Remote Options
Reed
help businesses unlock the power of their data. Contribute to internal projects and ongoing training to expand your expertise. What We’re Looking For Experience building data pipelines using Spark or Pandas. Familiarity with major cloud platforms (AWS, Azure, or GCP). Understanding of big data tools (EMR, Databricks, DataProc). Knowledge of data architectures (Data Lakes, Warehouses
scripting (Python, Bash) and programming (Java). Hands-on experience with DevOps tools: GitLab, Ansible, Prometheus, Grafana, Nagios, Argo CD, Rancher, Harbor. Deep understanding of big data technologies: Hadoop, Spark, and NoSQL databases. Nice to Have Familiarity with agile methodologies (Scrum or Kanban). Strong problem-solving skills and a collaborative working style. Excellent communication skills, with the ability
and technology teams. Exposure to low-latency or real-time systems. Experience with cloud infrastructure (AWS, GCP, or Azure). Familiarity with data engineering tools such as Kafka, Airflow, Spark, or Dask. Knowledge of equities, futures, or FX markets. Company Rapidly growing hedge fund offices globally including London Salary & Benefits The salary range/rates of pay is dependent
City of London, London, United Kingdom Hybrid/Remote Options
Opus Recruitment Solutions
frameworks, and cloud-based data platforms (AWS, Azure, or GCP). Proven track record in credit risk modelling, fraud analytics, or similar financial domains. Familiarity with big data technologies (Spark, Hive) and MLOps practices for production-scale deployments. Excellent communication skills to engage stakeholders and simplify complex concepts. Desirable Extras Experience with regulatory frameworks (e.g., Basel, GDPR) and model
on leadership and communication, ensuring all key builds and improvements flow through this individual. Working with a modern tech stack including AWS, Snowflake, Python, SQL, DBT, Airflow, Spark, Kafka, and Terraform, you'll drive automation and end-to-end data solutions that power meaningful insights. Ideal for ambitious, proactive talent from scale-up or start-up environments, this position
City of London, London, United Kingdom Hybrid/Remote Options
Hunter Bond
reliability Enjoy experimenting with emerging technologies and tools Value writing clean, modular, and maintainable code Are excited to learn more about financial markets and trading systems Bonus experience: Ruby, Spark, Trino, Kafka Financial markets exposure SQL (Postgres, Oracle) Cloud-native deployments (AWS, Docker, Kubernetes) Observability tools (Splunk, Prometheus, Grafana) Why Apply? This is a fantastic opportunity to join a
Newcastle Upon Tyne, Tyne and Wear, England, United Kingdom Hybrid/Remote Options
Reed
technologies. Spend time on internal projects, training, and development to expand expertise and contribute to business-critical client projects. Required Skills & Qualifications: Demonstrable experience in building data pipelines using Spark or Pandas. Experience with major cloud providers (AWS, Azure, or Google). Familiarity with big data platforms (EMR, Databricks, or DataProc). Knowledge of data platforms such as Data
City of London, London, United Kingdom Hybrid/Remote Options
Harnham
infrastructure-as-code Docker; Kubernetes (EKS, GKE, AKS); Jenkins, GitLab CI, or GitHub Actions; Terraform or CloudFormation; Prometheus, Grafana, Datadog, or New Relic; Slurm, Torque, LSF; MPI; Hadoop or Spark. Experience with high-performance computing, distributed systems, and observability tools Strong communication and executive presence, with the ability to translate complex technical concepts for diverse audiences
Practical knowledge of infrastructure as code, CI/CD best practices, and cloud platforms (AWS, GCP, or Azure). Experience with relational databases and data processing and query engines (Spark, Trino, or similar). Familiarity with monitoring, observability, and alerting systems for production ML (Prometheus, Grafana, Datadog, or equivalent). Understanding of ML concepts. You don't need to
London, South East, England, United Kingdom Hybrid/Remote Options
Method Resourcing
a plus). Experience with model lifecycle management (MLOps), including monitoring, retraining, and model versioning. Ability to work across data infrastructure, from SQL to large-scale distributed data tools (Spark, etc.). Strong written and verbal communication skills, especially in cross-functional contexts. Bonus Experience (Nice to Have) Exposure to large language models (LLMs) or foundational model adaptation. Previous
IT, STEM, Maths, Computer Science) or equivalent experience Good data modelling and software engineering knowledge, and strong knowledge of ML packages and frameworks Skilled in writing well-engineered code using Spark, with advanced SQL and Python coding skills Experienced in working with Azure Databricks Proven experience working with Data Scientists to deliver best-in-class solutions for model deployment and
e.g. AWS, Azure. Good knowledge of Linux, its development environments and tools Have experience in object-oriented methodologies and design patterns Understanding of Big Data technologies such as Hadoop, Spark Understanding of security implications and secure coding Proven grasp of software development lifecycle best practices, agile methods, and conventions, including Source Code Management, Continuous Integration Practical experience with agile
Databricks platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure … Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Program using languages such as SQL, Python, R, YAML and JavaScript. Data Integration: Integrate data from various sources, including relational databases, APIs, and … best practices. Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage
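As an illustration of the clean/enrich/aggregate work this listing describes, here is a minimal PySpark sketch against Delta tables; the Unity Catalog table names (main.bronze.orders, main.silver.daily_order_totals) and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists; this line covers local runs.
spark = SparkSession.builder.appName("orders-cleaning").getOrCreate()

# Read a raw Delta table registered in Unity Catalog (catalog.schema.table).
raw = spark.read.table("main.bronze.orders")

# Clean and enrich: de-duplicate, drop invalid rows, derive a date column.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("created_at"))
)

# Aggregate for downstream consumers.
daily_totals = cleaned.groupBy("order_date").agg(
    F.sum("amount").alias("total_amount"),
    F.count("order_id").alias("order_count"),
)

# Persist the result back as a Delta table.
(daily_totals.write.format("delta")
    .mode("overwrite")
    .saveAsTable("main.silver.daily_order_totals"))
```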
City of London, London, United Kingdom Hybrid/Remote Options
Peaple Talent
delivered solutions in Google Cloud Platform (GCP) Strong experience designing and delivering data solutions using BigQuery Proficient in SQL and Python Experience working with Big Data technologies such as Apache Spark or PySpark Excellent communication skills, with the ability to engage effectively with senior stakeholders Nice to haves: GCP Data Engineering certifications BigQuery or other GCP tool certifications
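For illustration, a minimal sketch of the BigQuery-plus-Spark combination this role names, using the open-source spark-bigquery-connector; the project/dataset/table name and the pinned connector version are assumptions, not details from the posting:

```python
from pyspark.sql import SparkSession

# Pull the BigQuery connector onto the classpath (version is an assumption;
# on Dataproc the connector is typically preinstalled).
spark = (
    SparkSession.builder
    .appName("bigquery-read")
    .config("spark.jars.packages",
            "com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.36.1")
    .getOrCreate()
)

# Read a BigQuery table into a Spark DataFrame (table name is hypothetical).
events = (
    spark.read.format("bigquery")
    .option("table", "my-project.analytics.events")
    .load()
)

# Simple aggregation to confirm the read worked.
events.groupBy("event_type").count().show()
```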