Senior MLops (Full Stack) Engineer | London | Foundation Models
What you’ll do
Tackle the biggest challenge in AI – Be part of the mission to bend the curve on compute costs, energy waste, and emissions in the LLM arms race. Our optimiser is redefining how the world trains and serves large models.
Work on the frontier – You’ll engineer the infrastructure behind cutting-edge AI systems — pushing the boundaries of speed, efficiency, and scale with a team that lives at the intersection of ML, systems, and optimisation research.
High-impact, high-autonomy – We’re a lean, expert-led team where your work ships fast, matters deeply, and scales globally. Expect ownership, speed, and the freedom to build without bureaucratic drag.
Foundation model as infra – Our optimiser is itself a foundation model. You’ll help serve, adapt, and scale it in the wild — an opportunity few engineers will ever get.
Equity that means something – You’re not late to the party. Join at a time when your equity still reflects the upside you help create.
- Build and maintain APIs (FastAPI or similar) to serve ML models
- Design and manage robust ML infrastructure using Kubernetes, Docker, and Terraform
- Deploy machine learning models into production and optimize them for performance
- Collaborate with ML teams to streamline training, deployment, and monitoring
- Build internal tools and dashboards (e.g., in React or Vue) for analytics and observability
- Own CI/CD pipelines and drive infrastructure automation
What we’re looking for
- 5+ years’ experience in backend or infrastructure-focused engineering roles
- Strong Python and API development skills (FastAPI, Flask, etc.)
- Proven experience with model deployment, containerization, and orchestration (K8s, Docker)
- Infrastructure-as-code experience (Terraform, Helm, etc.)
- Familiarity with cloud platforms like AWS, GCP, or Azure
- Bonus: Frontend experience (React, Vue.js) for building internal tools
- Company: SoCode Recruitment
- Location: London, UK