London, England, United Kingdom Hybrid / WFH Options
Project Recruit
understanding of Infrastructure (Azure, On-premises, and other cloud technologies)
- Knowledge of Cloud Platforms, especially Azure
- Proficiency in R programming
- HPC Skills: experience with Slurm, including installation and configuration
- Experience with Python installation and configuration on Linux systems
- Deep understanding of Biostatistics and Life Science domain, especially Clinical
- Basic
Will Do
- Enhance our CPU, GPU, HPC, and cloud infrastructure
- Implement upgrades, patching, and system enhancements
- Provide expertise with technologies such as Linux, CUDA, SLURM, Python, etc.
- Innovate to maintain the highest standards for our technology stack
- Drive IT solutions that align with our business objectives
- Research and evaluate
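Both roles above ask for hands-on Slurm, GPU, and Python experience. As a minimal illustrative sketch of what that typically involves, and not code taken from any listing, the snippet below submits a single-GPU batch job to Slurm from Python; the partition name, time limit, and wrapped command are assumptions.

```python
"""Illustrative sketch: submitting a GPU job to Slurm from Python.

Assumptions (not from any listing above): a partition named "gpu" exists,
sbatch is on PATH, and "python train.py" is the workload to run.
"""
import subprocess


def submit_gpu_job(script: str = "python train.py", partition: str = "gpu") -> str:
    """Submit a single-GPU batch job via sbatch and return the Slurm job ID."""
    cmd = [
        "sbatch",
        "--job-name=demo-gpu",
        f"--partition={partition}",
        "--gres=gpu:1",          # request one GPU
        "--time=01:00:00",       # one-hour wall-clock limit
        f"--wrap={script}",      # wrap a shell command instead of a script file
    ]
    result = subprocess.run(cmd, check=True, capture_output=True, text=True)
    # sbatch prints e.g. "Submitted batch job 123456"; keep the trailing job ID.
    return result.stdout.strip().split()[-1]


if __name__ == "__main__":
    print("Submitted job:", submit_gpu_job())
```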
Cambridge, England, United Kingdom Hybrid / WFH Options
ONE NUCLEUS
as part of a broad collaborative project
- Familiarity with cloud technologies (Docker, Kubernetes)
- Experience with high-performance computing environments and job schedulers such as SLURM
Apply now! Benefits and Contract Information
- Financial incentives: depending on circumstances, monthly family/marriage allowance of £272, monthly child allowance of £328 per
languages such as Python, Julia. Proficient in modern data science tool stacks (Jupyter, pandas, numpy, sklearn) with machine learning experience. Good understanding of using Slurm or similar parallel computing tools. Bachelor's or Master's degree in Computer Science, Mathematics, Statistics, or related STEM field from a top-ranked
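This posting pairs the Jupyter/pandas stack with Slurm for parallel work. A common pattern is a Slurm job array in which each task processes one data shard; the sketch below shows the task side of that pattern under assumed, hypothetical file paths, and is not code from the advertiser.

```python
"""Illustrative sketch: one task of a Slurm job array processing a data shard.

Assumed (hypothetical) usage: submitted with
    sbatch --array=0-9 --wrap="python process_shard.py"
"""
import os

import pandas as pd

# Slurm exports SLURM_ARRAY_TASK_ID to each array task; default to 0 for local runs.
task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", 0))

# Hypothetical shard layout: shards/part-0.csv ... shards/part-9.csv
df = pd.read_csv(f"shards/part-{task_id}.csv")

os.makedirs("results", exist_ok=True)
df.describe().to_csv(f"results/summary-{task_id}.csv")
print(f"Task {task_id}: processed {len(df)} rows")
```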
Hinxton, England, United Kingdom Hybrid / WFH Options
GenomeWeb LLC
as part of a broad collaborative project
- Familiarity with cloud technologies (Docker, Kubernetes)
- Experience with high-performance computing environments and job schedulers such as SLURM
How to Apply
To apply, please submit a covering letter and CV via our online system. Applications will close on 27/06/
technical teams
Required Skills & Knowledge:
- Strong understanding of Infrastructure (Azure, On-premises, Cloud)
- Proficiency in R and Python environments
- Experience with HPC systems (e.g., Slurm)
- Basic SAS knowledge
- Deep understanding of Life Science and Biostatistics
Desirable:
- Background in Life Sciences or Clinical Data
- Broad knowledge of infrastructure solutions
If
with ML engineers
- Demonstrates competence and rigor in software development.
- Has experience working with scientific computing/lab environments (e.g. has used or administered SLURM)
- Conversant with cloud computing; able to provide requirements to DevOps engineers
ABOUT IAMBIC THERAPEUTICS
Iambic is a clinical-stage life-science and technology company
Architect based in Hertfordshire or London for an initial 6-month contract. Note: *** INSIDE IR35 ***
The candidate should have a strong understanding of HPC (Slurm), including installation and configuration.
Main Responsibilities: Contribute to the development and understanding of various architectural levels.
Key Skills: Linux, Azure Cloud, HPC, Python, Posit
This company is on the hunt for HPC Engineers to power their 25 Petabyte system. Sound good? Well there's more! Imagine working with Slurm clusters and GPFS storage, all while being an integral part of groundbreaking translational research. You will work in a dynamic team of five, where your
CI tools like GitHub or Bamboo. Willingness to engage in technical discussions and produce high-quality code. Enthusiasm to learn and grow. Knowledge of Slurm and HPC is a bonus. The role involves developing in Python within an SRE team, impacting a greenfield set of services that will enhance
software engineering skills. Proficiency in Python and related ML frameworks such as JAX, PyTorch and XLA/MLIR. Experience with distributed training infrastructures (Kubernetes, Slurm) and associated frameworks (Ray). Experience using large-scale distributed training strategies. Hands-on experience training large models at scale and having contributed
software engineering skills. Proficiency in Python and related ML frameworks such as JAX, PyTorch and XLA/MLIR. Experience with distributed training infrastructures (Kubernetes, Slurm) and associated frameworks (Ray). Experience using large-scale distributed training strategies. Hands-on experience training large models at scale. Hands-on experience
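The two postings above both combine PyTorch with Slurm-managed distributed training. As a hedged sketch of one widespread pattern, rather than either employer's actual setup, the snippet below derives rank and world size from Slurm environment variables to initialise a PyTorch process group; it assumes the job is launched with srun and that MASTER_ADDR and MASTER_PORT are exported by the submission script.

```python
"""Illustrative sketch: initialising torch.distributed inside a Slurm job.

Assumptions (not from the listings): launched via `srun python train.py`
so Slurm sets SLURM_PROCID / SLURM_NTASKS, and the submission script
exports MASTER_ADDR and MASTER_PORT for the env:// rendezvous.
"""
import os

import torch
import torch.distributed as dist


def init_from_slurm() -> None:
    rank = int(os.environ["SLURM_PROCID"])        # global rank of this task
    world_size = int(os.environ["SLURM_NTASKS"])  # total number of tasks
    # The env:// init method reads RANK, WORLD_SIZE, MASTER_ADDR, MASTER_PORT.
    os.environ.setdefault("RANK", str(rank))
    os.environ.setdefault("WORLD_SIZE", str(world_size))
    backend = "nccl" if torch.cuda.is_available() else "gloo"
    dist.init_process_group(backend=backend, init_method="env://")


if __name__ == "__main__":
    init_from_slurm()
    print(f"rank {dist.get_rank()} of {dist.get_world_size()} initialised")
```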
a pivotal role in managing and optimising a large-scale infrastructure. Your expertise in Linux systems, along with experience in High-Performance Computing (HPC), Slurm workload management, and advanced storage solutions, will be essential to ensuring smooth and efficient operations. You'll be working alongside some of the
ideas to the business
Skills/Experience
- Systems Engineering experience in a high-availability & low-latency environment
- Knowledge of HPC cluster schedulers, such as Slurm, Grid Engine, MOAB, PBS
- Strong experience with scripting/automation is highly preferred (Python, Ansible, Chef, Puppet)
- Exposure to CPU Chipsets is a plus
and resource fencing
- Linux tuning with experience around high throughput or high performance computing
- Bash, Shell or Python
- Salt, Chef or Ansible
- HPC Architecture
- Slurm or Grid Engine or MOAB or PBS
- Containers and container orchestration
You will be joining a progressive and exciting company committed to excellence. They offer
Linux specifically around high throughput or high performance computing
· Proficiency in Programming Languages for Automation and Tooling.
· Experience with HPC cluster schedulers, such as Slurm, Grid Engine, MOAB, PBS, etc.
· Deep working knowledge of containers and container orchestration
· Experience contributing to and collaborating on a shared code base
· Experience
performance of models on accelerated computing (GPU, TPU, AI ASICs) clusters with high-speed networking.
- Experience scaling model training and inference using technologies like Slurm, ParallelCluster, Amazon SageMaker.
- Experience in developing and deploying large scale machine learning or deep learning models and/or systems into production, including batch
Cambridge, Cambridgeshire, United Kingdom Hybrid / WFH Options
Arm Limited
environments, particularly in performance-sensitive contexts
- General experience working in compute or storage-heavy environments
- Exposure to basic job scheduling systems (e.g., LSF, Jenkins, SLURM)
- Familiarity with monitoring tools like Prometheus, Grafana, or Linux-based telemetry
- Familiarity with profiling tools
- Ability to troubleshoot issues related to CPU, memory, I
access and storage, ensuring efficient I/O capabilities for data science workflows
- Utilize orchestration frameworks (e.g., Nextflow, Snakemake) and high-performance computing (e.g., SLURM, AWS Batch)
- Write efficient and optimized SQL queries for data manipulation and analysis
- Build and maintain GUIs, dashboards, and website front-ends for data
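The last listing asks for efficient SQL alongside the Nextflow/Snakemake and SLURM stack. Purely as an illustration of a parameterised, index-friendly query written from Python, the sketch below uses the standard-library sqlite3 module with a made-up samples table; the schema and data are assumptions, not anything from the advertiser.

```python
"""Illustrative sketch: a parameterised aggregation query from Python.

The `samples` table, its columns, and the use of sqlite3 are assumptions
made for illustration only.
"""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (run_id TEXT, assay TEXT, value REAL)")
conn.executemany(
    "INSERT INTO samples VALUES (?, ?, ?)",
    [("r1", "qc", 0.91), ("r1", "qc", 0.88), ("r2", "qc", 0.95)],
)
# Index the filter column so the aggregation stays cheap as the table grows.
conn.execute("CREATE INDEX idx_samples_assay ON samples(assay)")

query = """
    SELECT run_id, AVG(value) AS mean_value, COUNT(*) AS n
    FROM samples
    WHERE assay = ?
    GROUP BY run_id
    ORDER BY mean_value DESC
"""
# Parameter binding (the "?" placeholder) avoids string interpolation in SQL.
for row in conn.execute(query, ("qc",)):
    print(row)
```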