language models. Comfortable with cloud platforms (Azure preferred), CI/CD tools, and containerization (Docker, Kubernetes). Experience with monitoring and maintaining ML systems in production, using tools like MLflow, Weights & Biases, or similar. Strong communication skills and ability to work across disciplines with ML scientists, engineers, and stakeholders. Preferred Qualifications: PhD in Computer Science, Machine Learning, Engineering, or a …
/or real-time systems. Have knowledge of DevOps technologies such as Docker and Terraform, building APIs, CI/CD processes and tools, and MLOps practices and platforms like MLflow, plus monitoring. Have experience with agile delivery methodologies. Have good communication skills. Have an advanced degree in Computer Science, Mathematics, or a similar quantitative discipline. Nice to have: Hands-on … Technology stack: Python and associated ML/DS libraries (Scikit-learn, Numpy, LightGBM, Pandas, TensorFlow, etc.); PySpark; AWS cloud infrastructure: EMR, ECS, S3, Athena, etc.; MLOps: Terraform, Docker, Airflow, MLflow, Jenkins. On-call statement: Please be aware that our Machine Learning Engineers are required to be part of the technology on-call rota. More details on how this works …
Python and associated ML/DS libraries (Scikit-learn, Numpy, LightGBM, Pandas, LangChain/LangGraph, TensorFlow, etc.); PySpark; AWS cloud infrastructure: EMR, ECS, Athena, etc.; MLOps: Terraform, Docker, Airflow, MLflow. More information: Enjoy fantastic perks like private healthcare & dental insurance, a generous work from abroad policy, 2-for-1 share purchase plans, an EV Scheme to further reduce carbon emissions …
techniques. Have experience with cloud infrastructure (ideally AWS), DevOps technologies such as Docker or Terraform, and CI/CD processes and tools. Have previously worked with MLOps tools like MLflow and Airflow, or on common problems such as model and API monitoring, data drift and validation, autoscaling, and access permissions. Have previously worked with monitoring tools such as New Relic or … associated ML/DS libraries (scikit-learn, numpy, pandas, LightGBM, LangChain/LangGraph, TensorFlow, etc.); PySpark; AWS cloud infrastructure: EMR, ECS, ECR, Athena, etc.; MLOps: Terraform, Docker, Spacelift, Airflow, MLflow; Monitoring: New Relic; CI/CD: Jenkins, GitHub Actions. More information: Enjoy fantastic perks like private healthcare & dental insurance, a generous work from abroad policy, 2-for-1 share purchase …
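For context on the data drift and validation problems this listing mentions, here is a minimal sketch of a drift check using a two-sample Kolmogorov-Smirnov test; the feature samples, sample sizes, and 0.05 threshold are illustrative assumptions, not anything specified in the listing.

```python
# Minimal drift-check sketch: compare a serving-time feature distribution against
# the training-time reference using a two-sample KS test (illustrative values).
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(reference: np.ndarray, current: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if the current distribution differs significantly from the reference."""
    _, p_value = ks_2samp(reference, current)
    return p_value < alpha

reference = np.random.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature sample
current = np.random.normal(loc=0.4, scale=1.0, size=5_000)    # shifted serving-time sample
print(has_drifted(reference, current))  # expected: True, the mean shift is detected
```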
as they scale their team and client base. Key Responsibilities: Architect and implement end-to-end, scalable data and AI solutions using the Databricks Lakehouse (Delta Lake, Unity Catalog, MLflow). Design and lead the development of modular, high-performance data pipelines using Apache Spark and PySpark. Champion the adoption of Lakehouse architecture (bronze/silver/gold layers) to … governance, security, and compliance, including Unity Catalog. Excellent communication, leadership, and problem-solving skills. Desirable: Databricks certifications (e.g., Data Engineer Associate/Professional or Solutions Architect). Familiarity with MLflow, dbt, and BI tools such as Power BI or Tableau. Exposure to MLOps practices and deploying ML models within Databricks. Experience working within Agile and DevOps-driven delivery environments.
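As a rough illustration of the bronze/silver/gold pattern referenced in this listing, here is a minimal PySpark sketch of a bronze-to-silver promotion on Delta Lake; it assumes a Databricks-style runtime with a `spark` session available, and the table and column names are hypothetical.

```python
# Sketch of a bronze -> silver Delta Lake step (hypothetical table and column names).
from pyspark.sql import functions as F

def bronze_to_silver(spark, bronze_table="lake.bronze.orders",
                     silver_table="lake.silver.orders"):
    """Deduplicate and validate raw records, then promote them to the silver layer."""
    bronze = spark.read.table(bronze_table)

    silver = (
        bronze
        .dropDuplicates(["order_id"])                       # deduplicate on the business key
        .filter(F.col("order_id").isNotNull())              # drop rows missing the key
        .withColumn("processed_at", F.current_timestamp())  # audit column
    )

    (silver.write
        .format("delta")
        .mode("overwrite")
        .option("overwriteSchema", "true")
        .saveAsTable(silver_table))
```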
The Machine Learning (ML) Practice team is a highly specialized, customer-facing ML team at Databricks, facing an increasing demand for Large Language Model (LLM)-based solutions. We deliver professional services engagements to help our customers build, scale, and optimize …
and ETL processes. Good knowledge of MLOps principles and best practices to deploy, monitor, and maintain machine learning models in production. Familiarity with Git, CI/CD, and MLflow for managing and tracking code deployments and model versions. Experience with cloud-based data platforms such as AWS or Google Cloud Platform. Nice to have: Experience with Kafka. Proven track …
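For the MLflow model-version tracking mentioned here, a minimal experiment-tracking sketch might look like the following; the experiment name, model, and hyperparameters are illustrative, and it assumes a reachable MLflow tracking server (or the default local `mlruns` directory).

```python
# Minimal MLflow tracking sketch: log parameters, a metric, and a versioned model
# artifact for one training run (experiment name and parameters are hypothetical).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-classifier")
with mlflow.start_run():
    params = {"n_estimators": 100, "learning_rate": 0.1}
    model = GradientBoostingClassifier(**params).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")  # stored as a run artifact, versionable later
```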
Crawley, Sussex, United Kingdom Hybrid / WFH Options
Thales Group
as TensorFlow, PyTorch, Scikit-learn, and Keras. Understanding of algorithms and techniques for supervised and unsupervised learning. Experience with tools for model monitoring, logging, and performance evaluation, such as MLflow or Prometheus. Strong scripting skills in Bash, PowerShell, or similar languages for task automation, and the ability to write reusable and maintainable code to streamline ML operations. Proficient in …
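As a small illustration of the model-monitoring side (Prometheus is named alongside MLflow here), the sketch below exposes basic serving metrics with the `prometheus_client` library; the metric names, port, and dummy inference are assumptions for illustration only.

```python
# Sketch of exposing prediction count and latency metrics for Prometheus to scrape.
import time
from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("model_predictions_total", "Number of predictions served")
LATENCY = Histogram("model_prediction_latency_seconds", "Prediction latency in seconds")

def predict(features: dict) -> float:
    with LATENCY.time():          # record how long each prediction takes
        PREDICTIONS.inc()
        time.sleep(0.02)          # stand-in for real model inference
        return 0.5

if __name__ == "__main__":
    start_http_server(8000)       # serves /metrics on port 8000
    while True:
        predict({"feature": 1.0})
```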
Bedrock, S3, EC2, Lambda, IAM, VPC, ECS/EKS. Proficiency in Infrastructure-as-Code using AWS CDK or CloudFormation. Experience implementing and scaling MLOps workflows with tools such as MLflow and SageMaker Pipelines. Proven experience building, containerising, and deploying using Docker and Kubernetes. Hands-on experience with CI/CD tools (GitHub Actions, CodePipeline, Jenkins) and version control using Git/…
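For the Infrastructure-as-Code requirement, a minimal AWS CDK (v2, Python) sketch might provision something like an artifact bucket; the stack and bucket names are placeholders, not taken from the listing.

```python
# Minimal AWS CDK v2 sketch: a stack with a versioned, encrypted S3 bucket for
# model artifacts (all names are hypothetical).
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3
from constructs import Construct

class MlArtifactsStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self, "ModelArtifacts",
            versioned=True,                             # keep prior model versions
            encryption=s3.BucketEncryption.S3_MANAGED,  # server-side encryption
        )

app = cdk.App()
MlArtifactsStack(app, "MlArtifactsStack")
app.synth()
```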
technical teams, ideally in cloud-first environments (Azure, AWS, or GCP). Proficiency in Python, SQL, and cloud-native data tools. Solid understanding of MLOps, including model lifecycle management (e.g. MLflow), containers (Docker/Kubernetes), and monitoring. Experience delivering Data-as-a-Service products and APIs. Excellent communication skills - able to explain complex concepts to both technical and non-technical audiences.
Worcestershire, United Kingdom Hybrid / WFH Options
Tria
workflows, containerisation (e.g., Docker, Kubernetes), and production-grade APIs. Understanding of data governance, privacy, and regulatory compliance (e.g., GDPR). Nice to have: Familiarity with Infrastructure as Code (e.g., Ansible), MLflow, or orchestration frameworks. Background in both object-oriented and functional programming paradigms. Please note: Visa sponsorship is unfortunately not available for this role. Applicants must have the right to work …
a related discipline, with a BSc required and an MSc considered advantageous. Experienced in using machine learning frameworks such as Scikit-learn, Keras, and PyTorch, with additional familiarity with MLflow and AzureML seen as a positive. Have working knowledge of CI/CD practices, MLOps, ML pipelines, automated testing, and platforms such as AzureML, Google Cloud, or AWS. Possess …
Lead Machine Learning Engineer - LLMs - Ramboll Tech. At Ramboll Tech, we believe innovation thrives in diverse, supportive environments where everyone can contribute their best ideas. As a Lead Machine Learning Engineer, you will create cutting-edge AI solutions, mentor others …
What You'll Do: - Design and build an end-to-end MLOps pipeline using AWS, with a strong focus on SageMaker for training, deployment, and hosting. - Integrate and operationalize MLflow for model versioning, experiment tracking, and reproducibility. - Architect and implement a feature store strategy for consistent, discoverable, and reusable features across training and inference environments (e.g., using SageMaker Feature Store) … years of experience in MLOps, DevOps, or ML infrastructure roles. - Deep familiarity with AWS services, especially SageMaker, S3, Lambda, CloudWatch, IAM, and optionally Glue or Athena. - Strong experience with MLflow, experiment tracking, and model versioning. - Proven experience setting up and managing a feature store, and driving best practices for feature engineering in production systems. - Proficiency in model testing strategies …
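As a rough sketch of the SageMaker training-and-hosting flow this role centres on, the snippet below uses the `sagemaker` Python SDK; the execution role ARN, S3 URI, entry-point script, and instance types are all placeholders rather than anything taken from the listing.

```python
# Sketch of a SageMaker training job plus real-time endpoint using the sagemaker SDK
# (role ARN, S3 paths, and train.py are hypothetical placeholders).
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

estimator = SKLearn(
    entry_point="train.py",          # training script assumed to exist locally
    framework_version="1.2-1",
    instance_type="ml.m5.large",
    instance_count=1,
    role=role,
    sagemaker_session=session,
    hyperparameters={"n_estimators": 200},
)

estimator.fit({"train": "s3://example-bucket/training-data/"})  # placeholder S3 prefix

# Host the trained model behind a managed real-time endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```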
record delivering production-grade ML models. Solid grasp of MLOps best practices. Confident speaking to technical and non-technical stakeholders. 🛠️ Tech you’ll be using: Python, SQL, Spark, R, MLflow, vector databases, GitHub/GitLab/Azure DevOps, Jira, Confluence. 🎓 Bonus points for: MSc/PhD in ML or AI; Databricks ML Engineer (Professional) certified.
mentoring and managing data science teams. Deep knowledge of media measurement techniques, such as media mix modelling. Experience with advanced AI techniques, including NLP, GenAI, and CausalAI. Familiarity with MLflow, API design (FastAPI), and dashboard building (Dash). If this role looks of interest, reach out to Joseph Gregory.
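Where this listing mentions API design with FastAPI, a minimal model-serving endpoint could look like the sketch below; the feature schema and scoring logic are invented for illustration (a real service would load a trained model, e.g. from MLflow, instead).

```python
# Minimal FastAPI sketch for a prediction endpoint (schema and scoring are illustrative).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Features(BaseModel):
    spend: float
    impressions: int

@app.post("/predict")
def predict(features: Features) -> dict:
    # Placeholder scoring; in practice, load a registered model and call predict().
    score = 0.002 * features.spend + 0.0001 * features.impressions
    return {"predicted_uplift": score}

# Run locally with: uvicorn app:app --reload  (assuming this file is saved as app.py)
```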
engineering practices. Key competencies include: Databricks Platform Expertise: Proven experience designing and delivering data solutions using Databricks on Azure or AWS. Databricks Components: Proficient in Delta Lake, Unity Catalog, MLflow, and other core Databricks tools. Programming & Query Languages: Strong skills in SQL and Apache Spark (Scala or Python). Relational Databases: Experience with on-premises and cloud-based SQL databases.
Coding Skills: Proficient in Python, SQL, and one of PyTorch, TensorFlow, or Scikit-learn, with daily experience in writing, debugging, and optimising code. MLOps Knowledge: Familiarity with tools like MLflow, Kubeflow, or Vertex AI, and experience implementing CI/CD pipelines for machine learning. Understanding of Financial Services: Financial Services understanding is a plus, ideally in a lending environment. Strong …
on-premises and cloud environments to handle text and audio data processing loads for ML models. Deploy NLP models in cloud environments (AWS SageMaker) through Jenkins. Design and implement MLflow and other MLOps applications to streamline ML workflows that adhere to strict data privacy and residency guidelines. Communicate your work throughout the team and related departments. Mentor and guide …
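For the MLflow workflow step described here, one common pattern is registering a logged model and promoting it via an alias; the sketch below assumes MLflow 2.x with a configured tracking server, and the run ID and model name are placeholders rather than real values.

```python
# Sketch of registering a logged model in the MLflow Model Registry and tagging
# a "production" alias (run ID and model name are placeholders).
import mlflow
from mlflow import MlflowClient

run_id = "<run-id-from-a-training-job>"          # placeholder, not a real run
model_uri = f"runs:/{run_id}/model"

result = mlflow.register_model(model_uri, "nlp-text-classifier")  # hypothetical name

client = MlflowClient()
client.set_registered_model_alias(               # alias-based promotion (MLflow >= 2.3)
    name="nlp-text-classifier",
    alias="production",
    version=result.version,
)
```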