3 of 3 Permanent Distributed Computing Jobs in Chelmsford
Chelmsford, East Anglia, United Kingdom · Hybrid / WFH Options · Sure Exec Search
… regular system updates, patches, and security enhancements across supported environments.
• Stakeholder Collaboration: Work directly with research staff to understand technical needs and deliver practical computing solutions tailored to their work.
• Infrastructure Design: Design, implement, and maintain cloud-native and containerised infrastructure, with a focus on ML/AI research …
Develop and maintain CI/CD pipelines to optimise the deployment and integration of research software.
Essential Skills/Experience:
• Proven experience managing enterprise computing platforms in Kubernetes environments.
• Strong Linux and Windows system administration expertise.
• Familiarity with high-performance computing (HPC) clusters and distributed computing …
Chelmsford, East Anglia, United Kingdom · Hybrid / WFH Options · SR2 | Socially Responsible Recruitment | Certified B Corporation™
… stability, system visibility, and efficient resource usage
• Take ownership of cloud environments (primarily AWS), ensuring scalable, secure, and cost-effective architecture
• Lead and develop distributed engineering teams across platform, infrastructure, and data
• Build and maintain robust internal tooling and services that enhance developer workflows
• Promote a culture of automation …
• CI/CD pipelines (e.g. GitHub Actions)
• Interest or experience in developer productivity tools and AI-assisted engineering
• Understanding of network systems, protocols, or distributed computing challenges
What's on Offer 💚
• Salary up to £125k + equity + bonus
• 100% remote from anywhere in the UK
• Unlimited holiday …
Chelmsford, East Anglia, United Kingdom · Flux Computing
Our work environment rewards innovation, speed, and bold thinking.
The role
We're hiring Senior and Staff Software Engineers to build the high-performance computing infrastructure that powers our Optical Tensor Processing Units (OTPUs). This isn't just about scaling models; it's about rethinking how AI workloads … are executed at speed and scale. You'll lead the design and implementation of software systems that run distributed, low-latency inference across clusters. You'll work closely with hardware and ML teams to optimise every layer of the stack, from model representation and execution to data movement and … AI infrastructure at serious scale, we'd love to talk.
Responsibilities
• Design and build high-performance systems for running AI/ML workloads across distributed compute clusters
• Optimise for ultra-low latency and real-time inference at scale: profiling, tuning, and rewriting critical systems as needed
• Identify and resolve …