dashboards Deploying data science solutions on a cloud platform. Azure ML and Microsoft certification are highly desirable. MLOps experience is highly desirable, e.g. CI/CD, Feature Store, drift monitoring, MLflow, DVC, Docker, Kubernetes. Software development experience is desirable. Algorithm design experience is desirable. Data architecture knowledge is desirable. API design and deployment experience is desirable. Big data (e.g. Spark) experience. More ❯
Senior Data Engineer - (Azure/Databricks). Location: London - Scalpel. Time type: Full time. Posted 15 Days Ago. Job requisition id: REQ05851. This is your opportunity to join AXIS More ❯
Lead Machine Learning Engineer - LLMs - Ramboll Tech At Ramboll Tech, we believe innovation thrives in diverse, supportive environments where everyone can contribute their best ideas. As a Lead Machine Learning Engineer, you will create cutting-edge AI solutions, mentor others More ❯
The Machine Learning (ML) Practice team is a highly specialized customer-facing ML team at Databricks facing an increasing demand for Large Language Model (LLM)-based solutions. We deliver professional services engagements to help our customers build, scale, and optimize More ❯
as they scale their team and client base. Key Responsibilities: Architect and implement end-to-end, scalable data and AI solutions using the Databricks Lakehouse (Delta Lake, Unity Catalog, MLflow). Design and lead the development of modular, high-performance data pipelines using Apache Spark and PySpark. Champion the adoption of Lakehouse architecture (bronze/silver/gold layers) to … governance, security, and compliance, including Unity Catalog. Excellent communication, leadership, and problem-solving skills. Desirable: Databricks certifications (e.g., Data Engineer Associate/Professional or Solutions Architect). Familiarity with MLflow, dbt, and BI tools such as Power BI or Tableau. Exposure to MLOps practices and deploying ML models within Databricks. Experience working within Agile and DevOps-driven delivery environments. More ❯
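The bronze/silver/gold (medallion) layering this listing references can be sketched in miniature. The following is a simplified pure-Python illustration of the idea only, with hypothetical field names; a real Databricks Lakehouse pipeline would implement each layer as a Delta table written via PySpark:

```python
# Toy medallion-architecture sketch: bronze = raw ingested records,
# silver = cleaned/validated records, gold = business-level aggregates.

def to_silver(bronze_rows):
    """Clean raw rows: drop records missing required fields, normalize types."""
    silver = []
    for row in bronze_rows:
        if row.get("user_id") is None or row.get("amount") is None:
            continue  # quality gate between the bronze and silver layers
        silver.append({"user_id": str(row["user_id"]),
                       "amount": float(row["amount"])})
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned rows into per-entity metrics for consumption."""
    totals = {}
    for row in silver_rows:
        totals[row["user_id"]] = totals.get(row["user_id"], 0.0) + row["amount"]
    return totals

bronze = [{"user_id": 1, "amount": "9.50"},
          {"user_id": None, "amount": "3.00"},  # rejected at the silver gate
          {"user_id": 1, "amount": "0.50"}]
gold = to_gold(to_silver(bronze))
print(gold)  # {'1': 10.0}
```

The point of the layering is that each stage only ever reads the previous one, so data-quality rules live in exactly one place.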
with our AI research team to streamline the transition of models from research to production within the Ultralytics HUB ecosystem. Managing our experiment tracking and versioning using tools like MLflow and DVC. Your work will be critical to ensuring that our state-of-the-art models are accessible, reliable, and performant for our global user base. 🛠️ Skills and Experience 5+ … tools such as Terraform or Ansible. Familiarity with GPU acceleration using CUDA and model optimization for inference. Knowledge of MLOps tools for experiment tracking, and model serving such as MLflow, Kubeflow, or Weights & Biases. Excellent problem-solving skills and the ability to perform in a fast-paced, high-intensity environment. 🌟 Cultural Fit - Intensity Required Ultralytics is a high-performance environment More ❯
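The experiment tracking and versioning this role describes boils down to recording parameters, metrics, and an immutable run id per training run. The sketch below is a toy stand-in illustrating the concept only; it is not the MLflow or DVC API, which provide this (plus artifact storage and UI) out of the box:

```python
# Minimal illustration of what experiment trackers such as MLflow record:
# per-run parameters (inputs), metrics (outcomes), and a unique run id.
import uuid

class ToyTracker:
    def __init__(self):
        self.runs = {}

    def start_run(self):
        run_id = uuid.uuid4().hex
        self.runs[run_id] = {"params": {}, "metrics": {}}
        return run_id

    def log_param(self, run_id, key, value):
        self.runs[run_id]["params"][key] = value

    def log_metric(self, run_id, key, value):
        self.runs[run_id]["metrics"][key] = value

    def best_run(self, metric):
        # Model selection: pick the run that maximizes a tracked metric.
        return max(self.runs,
                   key=lambda r: self.runs[r]["metrics"].get(metric, float("-inf")))

tracker = ToyTracker()
for lr in (0.01, 0.1):  # hypothetical hyperparameter sweep
    run = tracker.start_run()
    tracker.log_param(run, "learning_rate", lr)
    tracker.log_metric(run, "accuracy", 0.9 if lr == 0.01 else 0.8)

best = tracker.best_run("accuracy")
print(tracker.runs[best]["params"])  # {'learning_rate': 0.01}
```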
/experience Hands-on data science expertise with code-based model development, e.g. R, Python. Strong knowledge of deploying end-to-end machine learning models in Databricks using PySpark, MLflow, and workflows. Strong knowledge of data platforms and tools, including Hadoop, Spark, SQL, and NoSQL databases. Communicate algorithmic solutions in a clear, understandable way. Leverage data visualization techniques and tools … and ETL processes is a plus. Good knowledge of MLOps principles and best practices to deploy, monitor, and maintain machine learning models in production. Familiarity with Git and MLflow for managing and tracking model versions. Experience with Kafka is a big bonus. Experience with cloud-based data platforms such as AWS or Google Cloud Platform. Proven track record of More ❯
like BERT or GPT. Advanced proficiency in Python, including PyTorch, Hugging Face Transformers, Pandas, and scikit-learn. Strong understanding of cloud services (preferably AWS) and experience with Docker, Kubernetes, MLflow, Kubeflow, or similar MLOps tools. 📩 Interested? Apply below or email me at mmatysik@trg-uk.com. More ❯
hands-on application in a risk, compliance or security-focused role. Strong proficiency in Python and statistical analysis. Familiarity with LLMs, ML pipeline management and AI lifecycle tools (e.g., MLflow, ModelOps platforms). Excellent communication and documentation skills for technical and non-technical stakeholders. Bachelor’s or Master’s degree in Machine Learning, AI, Computer Science, Statistics, Mathematics or a More ❯
training data pipelines, including data gathering, cleaning, augmentation, labeling, and managing vector databases for large-scale RAG workflows. Possess skills in model deployment, monitoring, versioning, and continuous improvement frameworks (MLflow, AWS SageMaker Model Monitor), ensuring models meet scalability, latency, and operational performance requirements. Have experience with deep learning frameworks (TensorFlow, PyTorch), AWS SageMaker, Bedrock, Lambda, and familiarity with Azure AI More ❯
London, England, United Kingdom Hybrid / WFH Options
Replika
in DevOps, cloud infrastructure, or site reliability engineering. Strong expertise in multi-cloud and hybrid infrastructure including AWS, GCP, and on-premises environments. Experience with MLOps tooling such as MLflow, Kubeflow, DataRobot, or similar platforms for ML lifecycle management. Experience with containerization and orchestration (Docker, Kubernetes), specifically for ML workloads and GPU clusters. Deep understanding of CI/CD pipelines More ❯
and model performance. Cloud and MLOps for Optimization Models: Familiarity with deploying and managing optimization models on cloud platforms (AWS, GCP, Azure) and employing MLOps practices (with tools like MLflow, BentoML) to ensure efficient lifecycle management of optimization solutions. Ethical AI and Continuous Learning: A robust understanding of AI ethics and privacy considerations, especially relevant to optimization, coupled with a More ❯
and ETL processes. Good knowledge of MLOps principles and best practices to deploy, monitor, and maintain machine learning models in production. Familiarity with Git, CI/CD, and MLflow for managing and tracking code deployments or model versions. Experience with cloud-based data platforms such as AWS or Google Cloud Platform. Nice to have: Experience with Kafka. Proven track More ❯
We are Dufrain, a pure-play data consultancy specialising in helping businesses unlock the true value of their data by providing market-leading data solutions and services which includes developing strategies for AI readiness, improving data literacy and culture, enhancing More ❯
What You'll Do - Design and build an end-to-end MLOps pipeline using AWS , with a strong focus on SageMaker for training, deployment, and hosting. - Integrate and operationalize MLflow for model versioning, experiment tracking, and reproducibility. - Architect and implement a feature store strategy for consistent, discoverable, and reusable features across training and inference environments (e.g., using SageMaker Feature Store … years of experience in MLOps, DevOps, or ML infrastructure roles. - Deep familiarity with AWS services , especially SageMaker , S3, Lambda, CloudWatch, IAM, and optionally Glue or Athena. - Strong experience with MLflow , experiment tracking , and model versioning. - Proven experience setting up and managing a feature store , and driving best practices for feature engineering in production systems . - Proficiency in model testing strategies More ❯
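The feature-store consistency this listing asks for (the same feature definitions served to both training and inference) can be shown with a toy sketch. This is an illustration of the concept only, with hypothetical feature names; a real implementation would register features in something like SageMaker Feature Store rather than a Python dict:

```python
# Toy feature-store sketch: each feature is defined exactly once and
# materialized identically for training and for online inference, which
# is the train/serve consistency a real feature store provides.
FEATURES = {
    "order_count_7d": lambda user: len(user["recent_orders"]),
    "avg_order_value": lambda user: (
        sum(user["recent_orders"]) / len(user["recent_orders"])
        if user["recent_orders"] else 0.0
    ),
}

def feature_vector(user):
    """Materialize all registered features for one entity."""
    return {name: fn(user) for name, fn in FEATURES.items()}

user = {"recent_orders": [20.0, 30.0, 10.0]}
train_row = feature_vector(user)  # row in the offline training set
serve_row = feature_vector(user)  # row computed at inference time
print(train_row)  # {'order_count_7d': 3, 'avg_order_value': 20.0}
```

Because both paths call the same registered definition, training/serving skew from reimplemented feature logic cannot arise by construction.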
techniques. Have experience with cloud infrastructure (ideally AWS), DevOps technologies such as Docker or Terraform, and CI/CD processes and tools. Have previously worked with MLOps tools like MLflow and Airflow, or on common problems such as model and API monitoring, data drift and validation, autoscaling, and access permissions. Have previously worked with monitoring tools such as New Relic or … associated ML/DS libraries (scikit-learn, numpy, pandas, LightGBM, LangChain/LangGraph, TensorFlow, etc.) PySpark AWS cloud infrastructure: EMR, ECS, ECR, Athena, etc. MLOps: Terraform, Docker, Spacelift, Airflow, MLflow Monitoring: New Relic CI/CD: Jenkins, Github Actions More information: Enjoy fantastic perks like private healthcare & dental insurance, a generous work from abroad policy, 2-for-1 share purchase More ❯
the wider data community. Skills/Attributes/Experience Profile: Proven experience deploying and managing ML models in production in Azure Databricks. Tech stack: Databricks, Unity Catalog, Python, Git, MLflow, Delta tables, Azure DevOps. Hands-on development experience using Python, particularly with TensorFlow, PyTorch, scikit-learn, boto3, and the Python Data Science stack (pandas, numpy, etc.). Strong analytical and More ❯
systems. Developing and managing data pipelines, including data gathering, cleaning, augmentation, labelling, and managing vector databases for RAG workflows. Model deployment, monitoring, versioning, and continuous improvement using frameworks like MLflow and AWS SageMaker. Experience with deep learning frameworks (TensorFlow, PyTorch), AWS SageMaker, Bedrock, Lambda; familiarity with Azure AI Foundry is a plus. Knowledge of software engineering best practices (version control More ❯
Better Placed Ltd - A Sunday Times Top 10 Employer!
containerization, and cloud deployment for large-scale models. Solid programming skills in Python and familiarity with machine learning frameworks like TensorFlow, PyTorch, Hugging Face Transformers, and MLOps tools (e.g., MLflow, Kubeflow). Strong analytical and problem-solving skills, with an aptitude for translating complex theoretical research into practical applications. Day to Day Conduct research and implementation on the development, training More ❯