Permanent CUDA Jobs in London

4 of 4 Permanent CUDA Jobs in London

Machine Learning Engineer II

London, South East England, United Kingdom
Hybrid/Remote Options
Hudl
… how to run video encoding, decoding, and transmission at scale (e.g. HLS, WebRTC, and FFmpeg). Accelerator experience: you've developed GPU kernels and/or ML compilers (e.g. CUDA, OpenCL, TensorRT plugins, MLIR, TVM). Real-time experience: you've optimized systems to meet strict utilization and latency requirements with tools such as NVIDIA Nsight. Embedded experience. …

Senior Machine Learning Engineer

London, South East England, United Kingdom
Hybrid/Remote Options
Synthesia
… identify high-impact initiatives and push the boundaries of model performance. You will work on re-implementing models efficiently using PyTorch and underlying technologies such as CUDA/Triton and Torch compilation. This includes: evaluating, profiling, and optimising compute resource usage (e.g. Hopper and Blackwell GPUs) for cost and time efficiency at training and inference time … developing customised, efficient solutions for inference pipelines (CUDA/Triton kernels); introducing or enhancing tooling for achieving optimal computational performance (e.g. DL compilers, ONNX, TensorRT); driving the adoption of best practices for large-model training, including checkpointing, gradient accumulation, and memory optimisation; and introducing or enhancing tooling for distributed training, performance monitoring, and logging (e.g. … You have a background in Computer Science/Engineering and 3+ years of industry experience (PhD preferred). You have worked on optimising large models for over 2 years. You have experience developing CUDA/Triton kernels and optimising models with DL compilers (torch.compile). You have strong coding skills in Python and C++ and care about writing clean, efficient code. …

CUDA Kernel Optimizer

London, South East England, United Kingdom
Hybrid/Remote Options
Mercor
Role Overview: Mercor is engaging advanced CUDA experts who specialize in GPU kernel optimization, performance profiling, and numerical efficiency. These professionals possess a deep mental model of how modern GPU architectures execute deep learning workloads, and are comfortable translating algorithmic concepts into finely tuned kernels that maximize throughput while maintaining correctness and reproducibility.

Key Responsibilities: Develop, tune, and benchmark CUDA kernels for tensor and operator workloads. Optimize for occupancy, memory coalescing, instruction-level parallelism, and warp scheduling. Profile and diagnose performance bottlenecks using Nsight Systems, Nsight Compute, and comparable tools. Report performance metrics, analyze speedups, and propose architectural improvements. Collaborate asynchronously with PyTorch Operator Specialists to integrate kernels into production frameworks. Produce well-documented, reproducible benchmarks and performance write-ups.

Ideal Qualifications: Deep expertise in CUDA programming, GPU architecture, and memory optimization. Proven ability to achieve quantifiable performance improvements across hardware generations. Proficiency with mixed precision, Tensor Core usage, and low-level numerical stability considerations. Familiarity with frameworks such as PyTorch, TensorFlow, or Triton (beneficial but not required). Strong communication skills and independent problem-solving. …

PyTorch Operator

London, South East England, United Kingdom
Mercor
… functions in C++ ATen. Build and validate Python bindings with correct gradient propagation and test coverage. Create "golden" reference implementations in eager mode for correctness validation. Collaborate asynchronously with CUDA or systems engineers who handle low-level kernel optimization. Profile, benchmark, and report performance trends at the operator and graph level. Document assumptions, APIs, and performance metrics for reproducibility. … plus.

More About the Opportunity: Ideal for contractors who enjoy building clean, high-performance abstractions in deep learning frameworks. Work is asynchronous, flexible, and outcome-oriented. Collaborate with CUDA optimization specialists to integrate and validate kernels. Projects may involve primitives used in state-of-the-art AI models and benchmarks.

Compensation & Contract Terms: Typical range …
CUDA Salary Percentiles in London

10th Percentile: £67,250
25th Percentile: £70,625
Median: £77,500
75th Percentile: £83,750
90th Percentile: £86,750