4 of 4 PyTorch Jobs in Scotland

Lead Data Scientist - Remote

Hiring Organisation
Exposed Solutions
Location
EH12, Hermiston, City of Edinburgh, United Kingdom
Employment Type
Permanent
Salary
£60,000 - £80,000/annum
… productionisation · Deep experience in MLOps on Azure, including CI/CD, monitoring, and scaling pipelines. · Strong coding skills in Python, with frameworks such as PyTorch, FastAPI, and Azure CLI. ALL APPLICANTS MUST BE FREE TO WORK IN THE UK. Exposed Solutions is acting as an employment agency to this client. …

Senior Python Developer

Hiring Organisation
Robert Walters
Location
Glasgow, Lanarkshire, Scotland, United Kingdom
Employment Type
Temporary
Salary
Salary negotiable
… architectures, and outline migration stages, translating requirements into actionable Python implementations · Build reliable ML-driven tools and scripts in Python (using frameworks like TensorFlow, PyTorch, or scikit-learn) to optimise workload automation, predict placement needs, and handle legacy-to-modern tech integrations · Conduct code reviews, implement CI/CD pipelines with …

Systems Research Engineer

Hiring Organisation
European Tech Recruit
Location
Edinburgh, Scotland, United Kingdom
… large-scale inference pipelines, specifically focusing on KV cache management and heterogeneous memory scheduling. AI Serving: Optimising high-throughput frameworks (vLLM, Ray Serve, PyTorch Distributed) to ensure low-latency, multi-tenant performance. Research Leadership: Contributing to top-tier venues (OSDI, NSDI, EuroSys, MLSys) and driving those innovations into real-world …

AI Systems Research Engineer

Hiring Organisation
Project People
Location
Edinburgh, Scotland, United Kingdom
… data pipelines, focusing on KV cache management, heterogeneous memory scheduling, and high-throughput inference serving using frameworks like vLLM, Ray Serve, and modern PyTorch Distributed systems. Scalable Model Serving Infrastructure: Develop and evaluate frameworks that enable efficient multi-tenant, low-latency, and fault-tolerant AI serving across distributed environments. Research … systems architecture, inference serving, and AI infrastructure. Hands-on experience with LLM inference and LLM serving frameworks (e.g., vLLM, Ray Serve, TensorRT-LLM, TGI, PyTorch) and distributed KV cache optimisation. Familiarity with GPUs and how they execute LLMs. Solid grounding in systems research methodology, distributed algorithms, and profiling tools. Proficiency …