Remote Ray Job Vacancies

26 to 29 of 29 Remote Ray Jobs

Senior Software Engineer, ML Ops

London, United Kingdom
Hybrid / WFH Options
Merantix
… experiments

- Experience with ML model monitoring systems
- Experience with ML training and data pipelines, and working with distributed systems
- Proficiency with modern deep learning libraries and frameworks (PyTorch, Lightning, Ray)

Preferred Qualifications

- Experience owning a product from development through monitoring and incident response
- Knowledge of the design, manufacturing, AEC, or media & entertainment industries
- Experience with Autodesk or similar products (CAD …)
Employment Type: Permanent
Salary: GBP Annual

Lead DevOps Engineer

London, England, United Kingdom
Hybrid / WFH Options
Sprout.ai LTD
… features which deliver AI capabilities to some of the biggest names in the insurance industry. We are developing a modern real-time ML platform using technologies like Python, PyTorch, Ray, k8s (Helm + Flux), Terraform, Postgres, and Flink on AWS. We are very big fans of Infrastructure-as-Code and enjoy Agile practices. As a team, we're driven by …

Junior Data Scientist

London, England, United Kingdom
Hybrid / WFH Options
Artefact
… explore techniques like time-series forecasting, clustering, or Bayesian inference.

- Orchestration and Parallelisation: Manage workflows with tools like Metaflow, MLflow, Airflow, or DVC; utilise parallelisation frameworks like PySpark or Ray for efficient model processing.
- Exposure to cloud platforms (AWS, Azure, GCP)

Why you should join us

Artefact is revolutionizing marketing: join us to build the future of marketing. Progress: every …
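The parallelisation requirement in this listing maps onto a simple task-fan-out pattern: split the data, run the same model step on each shard concurrently, gather the results. A minimal sketch using only the standard library (the function and data here are hypothetical; with Ray, the same shape is a `@ray.remote` task plus `ray.get` on the resulting futures):

```python
from concurrent.futures import ThreadPoolExecutor

def score_segment(segment):
    # Hypothetical per-segment model step; a real pipeline would
    # fit or score a model on this shard of the data.
    return sum(segment) / len(segment)

def parallel_scores(segments):
    # Fan segments out to workers and gather results in input order.
    # Ray expresses the same pattern as:
    #   futures = [score_segment.remote(s) for s in segments]
    #   results = ray.get(futures)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(score_segment, segments))

print(parallel_scores([[1, 2, 3], [4, 5, 6]]))  # [2.0, 5.0]
```

For CPU-bound model work, Ray (or a process pool) sidesteps the GIL; the thread pool above is just the smallest runnable illustration of the fan-out/gather shape.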

Staff Software Engineer, Simulation ML Infrastructure

London, England, United Kingdom
Hybrid / WFH Options
Waymo
… training, deploying, and optimizing large-scale machine learning systems from data to model.

- Solid experience in the development and optimization of machine learning infrastructure tools like DeepSpeed, PyTorch, TensorFlow, Ray, or similar frameworks.
- Expertise in distributed training techniques, including gradient sharding and optimization strategies for scaling large models across ML accelerators; … profiling tools to uncover performance bottlenecks.
- Familiarity with custom …
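The distributed-training expertise this listing asks for rests on one core step: in synchronous data parallelism, each worker computes gradients on its own data shard, then all workers apply the element-wise mean of those gradients (an all-reduce) so parameter replicas stay identical. A minimal pure-Python sketch with hypothetical names and toy numbers, not any particular framework's API:

```python
def average_gradients(worker_grads):
    # All-reduce (mean) step of synchronous data-parallel training:
    # each inner list is one worker's gradient vector for its shard.
    n = len(worker_grads)
    return [sum(component) / n for component in zip(*worker_grads)]

def sgd_step(params, worker_grads, lr=0.1):
    # Every replica applies the same averaged gradient, so parameter
    # copies remain in sync after the update.
    avg = average_gradients(worker_grads)
    return [p - lr * g for p, g in zip(params, avg)]

print(sgd_step([0.0, 0.0], [[1.0, 2.0], [3.0, 4.0]]))
```

Gradient sharding (as in DeepSpeed's ZeRO) builds on this by partitioning the gradient and optimizer state across workers instead of replicating them, trading extra communication for memory headroom.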