prompt engineering (e.g., GPT, BERT, T5 family). Familiarity with on-device or edge-AI deployments (e.g., TensorFlow Lite, ONNX, mobile/embedded inference). Knowledge of MLOps tooling (MLflow, Weights & Biases, Kubeflow, or similar) for experiment tracking and model governance. Open-source contributions or published papers in top-tier AI/ML conferences (NeurIPS, ICML, CVPR, ACL, etc.).
of ML algorithms, NLP, deep learning, and statistical methods. Experience with Docker, Kubernetes, and cloud platforms like AWS/Azure/GCP. Hands-on experience with MLOps tools (MLflow, SageMaker, Kubeflow, etc.) and version control systems. Strong knowledge of APIs, microservices architecture, and CI/CD pipelines. Proven experience in leading teams, managing stakeholders, and delivering end-to-end
engineering concepts and best practices (e.g., versioning, testing, CI/CD, API design, MLOps) Building machine learning models and pipelines in Python, using common libraries and frameworks (e.g., TensorFlow, MLflow) Distributed computing frameworks (e.g., Spark, Dask) Cloud platforms (e.g., AWS, Azure, GCP) and HPC Containerization and orchestration (Docker, Kubernetes) Strong problem-solving skills and the ability to analyse issues
warehousing solutions (Snowflake, BigQuery, Redshift) Experience with cloud platforms (AWS, Azure, GCP) and their ML and AI services (SageMaker, Azure ML, Vertex AI) Knowledge of MLOps tools including Docker, MLflow, Kubeflow, or similar platforms Experience with version control (Git) and collaborative development practices Excellent analytical thinking and problem-solving abilities Strong communication skills with ability to explain technical concepts to
attention to detail. Nice to Have Experience with ML and/or computer vision frameworks like PyTorch, NumPy or OpenCV. Knowledge of ML model serving infrastructure (TensorFlow Serving, TorchServe, MLflow). Knowledge of WebGL, Canvas API, or other graphics programming technologies. Familiarity with big data technologies (Kafka, Spark, Hadoop) and data engineering practices. Background in computer graphics, media processing, or
ML Engineering, DevOps, or Data Engineering with exposure to ML lifecycle. Hands-on experience building or maintaining ML workflows or pipelines. Strong Python skills and experience with tools like MLflow, Scikit-learn, or PyTorch. Familiarity with cloud platforms (especially Azure, AWS desirable). Good understanding of containerisation (Docker) and orchestration (e.g. Kubernetes). Exposure to CI/CD tools (GitHub
Python and associated ML/DS libraries (scikit-learn, numpy, LightGBM, Pandas, LangChain/LangGraph, TensorFlow, etc.) PySpark AWS cloud infrastructure: EMR, ECS, Athena, etc. MLOps: Terraform, Docker, Airflow, MLflow More information: Enjoy fantastic perks like private healthcare & dental insurance, a generous work from abroad policy, 2-for-1 share purchase plans, an EV Scheme to further reduce carbon emissions
Lancaster, Lancashire, United Kingdom Hybrid / WFH Options
Galaxy Systems
with PyTorch, TensorFlow, Scikit-learn, and transformer-based models. Practical knowledge of LLM integration (e.g., GPT, Claude) and RAG architecture. Experience with ML lifecycle management tools like AWS SageMaker, MLflow, or Databricks. Working knowledge of CUDA, Nvidia GPUs, and distributed training. Experience with AWS services (S3, Lambda, EC2, SageMaker, Bedrock, etc.). Desired: Experience deploying models as APIs/microservices
years experience in AI/ML roles or relevant hands-on projects Nice to Have (Bonus): NLP, Computer Vision, or Reinforcement Learning experience Knowledge of MLOps tools (MLflow, Kubeflow, etc.) Familiarity with SQL or Big Data tools (e.g., Spark) Please apply only if you meet the skill and experience criteria. This role is open and hiring now, so don't delay.
or Azure) Ability to work independently and communicate effectively in a remote team Bonus Points Experience with Hugging Face Transformers, LangChain, or RAG pipelines Knowledge of MLOps tools (e.g., MLflow, Weights & Biases, Docker, Kubernetes) Exposure to data engineering or DevOps practices Contributions to open-source AI projects or research publications What We Offer Fully remote working A collaborative and inclusive
London, South East, England, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
or Azure) Ability to work independently and communicate effectively in a remote team Bonus Points Experience with Hugging Face Transformers, LangChain, or RAG pipelines Knowledge of MLOps tools (e.g. MLflow, Weights & Biases, Docker, Kubernetes) Exposure to data engineering or DevOps practices Contributions to open-source AI projects or research publications What We Offer Fully remote working A collaborative and inclusive
related B2C environments Strong programming skills in Python, with experience using libraries like scikit-learn, XGBoost, and pandas Practical experience in MLOps or strong knowledge of model deployment (e.g. MLflow, Airflow, Docker, Kubernetes, model monitoring tools) Familiarity with cloud environments (AWS, GCP, or Azure) and data pipelines Excellent communication skills—able to explain technical work to non-technical stakeholders and
Skills: Experience using R and NLP or deep learning techniques (e.g. TF-IDF, word embeddings, CNNs, RNNs). Familiarity with Generative AI and prompt engineering. Experience with Azure Databricks, MLflow, Azure ML services, Docker, Kubernetes. Exposure to Agile development environments and software engineering best practices. Experience working in large or complex organisations or regulated industries. Strong working knowledge of Excel
Python-based ML code (scikit-learn, TensorFlow, PyTorch, etc.). Strong ownership mindset and a collaborative attitude. Nice to Have Experience with model versioning and ML serving frameworks (e.g., MLflow, Seldon, Triton). Understanding of data privacy/security implications in model and data pipelines. Experience working in cross-functional teams with data scientists and product owners.
design discussions, and performance tuning Requirements: 3+ years of experience in a Machine Learning Engineer or similar role Proficiency in Python, ML frameworks (TensorFlow, PyTorch), and deployment tools (Docker, MLflow, etc.) Experience building scalable ML pipelines in cloud environments (AWS, GCP or Azure) Familiarity with energy systems, smart metering, or IoT data is a significant bonus Bachelor's or Master's degree
and Gurobi. Other programming languages are a plus. Solid experience with SQL, data engineering, and cloud-based tools (AWS preferred), as well as version control (Git), experiment tracking (e.g. MLflow), and containerisation (e.g. Docker). Familiarity with CI/CD tools (e.g. GitHub Actions), model/data versioning (e.g. DVC), and orchestration frameworks (e.g. Airflow, Dagster). Skilled in testing
TensorFlow) with a strong grounding in evaluating NLP models using classification and ranking metrics, and experience running A/B or offline benchmarks. Proficient with MLOps and training infrastructure (MLflow, Kubeflow, Airflow), including CI/CD, hyperparameter tuning, and model versioning. Strong social media data extraction and scraping skills at scale (Twitter v2, Reddit, Discord, Telegram, Scrapy, Playwright). Experience
East London, London, United Kingdom Hybrid / WFH Options
Talent Hero Ltd
and data teams Optimise model performance for accuracy, speed, and cost Run experiments, A/B tests, and validate approaches statistically Use tools like Python, TensorFlow, PyTorch, Scikit-learn, MLflow, SQL, Docker, AWS/GCP What You'll Need: Bachelor's degree in Computer Science, AI, Machine Learning, or related field 1+ year of experience in a Machine Learning Engineer or similar
Manchester, North West, United Kingdom Hybrid / WFH Options
Talent Hero Ltd
and data teams Optimise model performance for accuracy, speed, and cost Run experiments, A/B tests, and validate approaches statistically Use tools like Python, TensorFlow, PyTorch, Scikit-learn, MLflow, SQL, Docker, AWS/GCP What You'll Need: Bachelor's degree in Computer Science, AI, Machine Learning, or related field 1+ year of experience in a Machine Learning Engineer or similar
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
Talent Hero Ltd
and data teams Optimise model performance for accuracy, speed, and cost Run experiments, A/B tests, and validate approaches statistically Use tools like Python, TensorFlow, PyTorch, Scikit-learn, MLflow, SQL, Docker, AWS/GCP What You'll Need: Bachelor's degree in Computer Science, AI, Machine Learning, or related field 1+ year of experience in a Machine Learning Engineer or similar
data modeling, data warehousing, data integration, and data governance. Databricks Expertise: They have hands-on experience with the Databricks platform, including its various components such as Spark, Delta Lake, MLflow, and Databricks SQL. They are proficient in using Databricks for various data engineering and data science tasks. Cloud Platform Proficiency: They are familiar with cloud platforms like AWS, Azure, or