London, England, United Kingdom Hybrid / WFH Options
Moonvalley
role is central to our mission of training models exclusively on clean, high-quality data. You will lead the design of data ingestion pipelines, data annotations, and high-throughput, distributed systems that support large-scale data processing and curation. You’ll work closely with researchers, engineers, and infrastructure teams to ensure that our data pipeline is not just performant … validation, filtering, and labeling to ensure only clean, high-quality data flows through the pipeline. Collaborate with research to define data quality benchmarks. Optimize end-to-end performance across distributed data processing frameworks (e.g., Apache Spark, Ray, Airflow). Work with infrastructure teams to scale pipelines across thousands of GPUs. Work directly with the leadership on the data team … in managing and leading small teams of engineers. Expertise in Python, Spark, Airflow, or similar data frameworks. Understanding of modern infrastructure: Kubernetes, Terraform, object stores (e.g. S3, GCS), and distributed computing environments. Strong communication and leadership skills; you can bridge the gap between engineering and research. Skilled at balancing rapid, iterative delivery with a focus on long-term …
About this Job Scopeworker's software engineers are developing a next-generation enterprise platform. We are looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile. As a software engineer, you will work on a …
Gloucester, Gloucestershire, South West, United Kingdom Hybrid / WFH Options
NSD
solutions and manage pipelines that transform diverse data sources into valuable insights for our client's National Security customers. You will collaborate with clients to solve complex challenges, utilising distributed computing techniques to handle large-scale, real-time, and unstructured data. Responsibilities include: Design and develop data pipelines, including ingestion, orchestration, and ETL processing (e.g., NiFi). Ensure …
London, England, United Kingdom Hybrid / WFH Options
Searchability
solutions and manage pipelines that transform diverse data sources into valuable insights for our client’s National Security customers. You will collaborate with clients to solve complex challenges, utilising distributed computing techniques to handle large-scale, real-time, and unstructured data. Responsibilities include: Design and develop data pipelines, including ingestion, orchestration, and ETL processing (e.g., NiFi). Ensure …
Gloucester, England, United Kingdom Hybrid / WFH Options
Searchability NS&D
solutions and manage pipelines that transform diverse data sources into valuable insights for our client’s National Security customers. You will collaborate with clients to solve complex challenges, utilising distributed computing techniques to handle large-scale, real-time, and unstructured data. Responsibilities include: Design and develop data pipelines, including ingestion, orchestration, and ETL processing (e.g., NiFi). Ensure …
products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day.
point identification skills, adept at uncovering customer needs Proficiency in coding, preferably with Python Experience with common data science libraries, such as PyTorch, TensorFlow, and scikit-learn Familiarity with distributed computing frameworks, including Spark, Ray, and Dask. Comfortable working in Linux environments, with hands-on experience in system navigation and troubleshooting Understanding of Kubernetes, Docker, and containerization, with …
build, and maintain both compute and storage services that are continuously scaling to support the growth of the company. This role requires in-depth knowledge of Linux platforms and distributed storage systems, including performance optimization and capacity management. The candidate should ideally have experience with bare metal, hypervisors, containers, and public cloud platforms. Daily tasks may include hardware and … at scale through standardization, automation, testing, and in-depth monitoring. Enforce development standards for source control, testing, and continuous integration for infrastructure, OS, patches, and configuration management. Manage a distributed compute environment and several petabyte-scale storage systems. Install, manage, and monitor the Linux operating system (RHEL based). Troubleshoot hardware and software issues throughout the stack. Collaborate with … long-term objectives. Required Qualifications: 5+ years of experience in a technology infrastructure role, including Linux administration. Experience with configuration management tools such as Chef or Ansible. Experience with distributed storage and protocols (NFS, GPFS, WEKA). Proficiency in Python or other high-level languages (Go, Rust, etc.). Experience with modern software development practices: version control, agile development …
has been created to ensure that we have a hyper-fast, resilient build system that optimizes tool use, scales with our needs, and continually maximises simulation workloads on our distributed compute grid. The role Our GPU & AI hardware teams need an industry-leading automated tool stack to produce our upcoming IP roadmap – so the ‘Hardware Tools & Flows’ team has … been created to ensure that we have a hyper-fast, resilient build system that optimizes tool use, scales with our needs, and continually maximises simulation workloads on our distributed compute grid. As a founding member of our team, you are joining us at the start of a revolution – with your technical expertise, innovation mindset, and ‘get stuff done’ attitude … or more of the following: Build systems (e.g. Bazel, Nextflow, FuseSoC) Hardware EDA tools (e.g. simulation, linting, synthesis) SystemVerilog, C/C++, Simulator DPI/VPI Containerization (e.g. Docker) Distributed Compute, Orchestration Jenkins Automation Software templating, rendering Data Engineering/Data Science/Machine Learning Who We Are Imagination is a UK-based company that creates silicon and software …
London, England, United Kingdom Hybrid / WFH Options
Moonvalley
video models. This role is central to our mission of training models exclusively on clean, high-quality data. You'll develop data ingestion pipelines, captioning systems, and high-throughput, distributed architectures for large-scale data processing and curation. You’ll be responsible for solving some of the toughest challenges in data quality and model performance — from training and shipping … modal models. Experience managing large-scale datasets and pipelines in production. Fluency with Python, Spark, Airflow, or similar frameworks. Understanding of modern cloud infrastructure: Kubernetes, Terraform, S3/GCS, distributed compute. Comfortable operating in environments with ambiguity and evolving priorities. Nice to Haves: Experience working on foundational model training pipelines (image, video, or language). Experience with video-specific …
daily → weekly → monthly roll-ups) and serves the results back to cinema managers in seconds. As volume soars, we need a backend-focused engineer to: Own and harden this distributed reporting engine. Design data pipelines that can ingest years of transactional data and still answer ad-hoc questions quickly. You will partner with—but not be accountable for—front … customers notice. Why INDY Rocks: Massive Visible Impact: Your optimizations ripple out to millions of moviegoers. Start-to-Finish Ownership: No layers of bureaucracy; ship, measure, iterate. Technically Spicy: Distributed computing meets real-world financial correctness. Tight-Knit Crew: Collaborate directly with founders and domain experts.
Heata is a groundbreaking green distributed compute network that uses the waste heat from compute to heat the water in people’s homes, helping to tackle climate change and the fuel poverty crisis simultaneously. Backed by British Gas and a number of other prominent investors, the company has a network of units installed in UK homes and is now … of an innovative clean-tech project, leveraging technical skills to make a positive environmental impact. It's ideal for someone passionate about green technology and highly skilled in modern computing and network technologies. We’ve developed a solid base in conjunction with SecureLinx ( https://securelinx.com/) and SUSE. We’re looking to build out and scale up … testing wherever possible to enable quick and secure repeatable deployments of services for our growing client base. Core Responsibilities: Platform Development: Lead the running and scale out of our distributed compute network, leveraging your experience with enterprise Linux environments, Dev-Ops, and Cloud-Ops. Automation and Scalability: Define and help implement automation processes to ensure quick and secure deployment …
scale of AWS. We are looking for a Software Development Engineer to join the Amazon FSx for Windows team as we grow and innovate with Windows on AWS. Utility Computing (UC) AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon’s Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), to consistently released … their cloud services. Key job responsibilities • Collaborate with cross-disciplinary Amazonians to conceive, design, and bring products and services to market. • Design and build innovative technologies in a large distributed computing environment and help lead fundamental changes in the industry. • Dive deep into technical challenges at the OS, storage, and networking layers • Work in an agile environment to … contribute to a fun and inclusive work environment A day in the life Our team of talented, passionate engineers use the latest in serverless and virtualization technologies to build distributed systems at AWS scale. We solve complex, varied problems to enable customers in a diverse set of businesses from AI/ML to EDA, Media & Entertainment to HPC, and …
day-to-day Quant Systems is a global team that designs and maintains the firm’s largest compute infrastructure, which includes operating system platforms, software development tooling, high-performance computing, networking, and storage for research and trading. This engineer will have the opportunity to work on a wide variety of technology initiatives in a distributed computing environment …
teams through high impact projects that use the latest data analytics technologies? Would you like a career path that enables you to progress with the rapid adoption of cloud computing? At Amazon Web Services, we're hiring highly technical cloud architects specialised in data analytics to collaborate with our customers and partners to derive business value from the latest in … pre-sales visits, understanding customer requirements, creating consulting proposals and creating packaged data analytics service offerings. Delivery - Engagements include projects proving the use of AWS services to support new distributed computing solutions that often span private cloud and public cloud services. Engagements may include migration and modernisation of existing data applications and development of new data applications using … experiences, don't let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
scale ML models across our global operations. You'll collaborate with leading researchers, hardware experts, and software engineers to build robust solutions that maximize the potential of GPU acceleration, distributed computing, and the latest open-source tools. Your work will influence our trading strategies by accelerating experimentation cycles that foster continuous innovation and refinement. This is a unique … at the intersection of advanced machine learning and trading, where your contributions will shape the future of IMC's technology and trading capabilities. Your Core Responsibilities: Develop large-scale distributed training pipelines to manage datasets and complex models Build and optimize low-latency inference pipelines, ensuring models deliver real-time predictions in production systems Develop libraries to improve the … Python, CUDA, or C++ Knowledge of machine learning frameworks such as PyTorch, TensorFlow, or JAX Proficiency in GPU programming for training and inference acceleration (e.g., CuDNN, TensorRT) Experience with distributed training for scaling ML workloads (e.g., Horovod, NCCL) Exposure to cloud platforms and orchestration tools A track record of contributing to open-source projects in machine learning, data science …
London, England, United Kingdom Hybrid / WFH Options
Enertek Group
Enertek Group Type: Full-time | Leadership | Equity Available About Us We are building the future of compute. Our platform is a decentralized, enterprise-grade cloud network , delivering scalable, globally distributed GPU resources for AI training , high-performance gaming , and Web3 infrastructure . By democratizing access to GPU power, we are accelerating innovation in machine learning, blockchain applications, and immersive … the economics and accessibility of large-scale computing. The Role We are looking for a visionary Head of Engineering to lead our growing team of backend, blockchain, DevOps, and distributed systems engineers. You will set the technical roadmap, scale the team, and help us deliver a robust, secure, and performant decentralized GPU network. Key Responsibilities Technical Leadership: Own and … evolve the technical architecture of a globally distributed compute platform. Team Management: Hire, mentor, and grow high-performing engineering teams across backend, protocol, and infrastructure domains. Strategic Planning: Collaborate with product, research, and business teams to define and execute the technical vision. Scalability & Performance: Ensure our platform is scalable, secure, and highly available across global regions. Security & Reliability: Champion …
or journals - Experience programming in Java, C++, Python or related language - Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing PREFERRED QUALIFICATIONS - Experience using Unix/Linux - Experience in professional software development Our inclusive culture empowers Amazonians to deliver the best results for …
field experience - Experience programming in Java, C++, Python or related language - Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing - Experience in state-of-the-art deep learning model architecture design, deep learning training and optimization, and model pruning PREFERRED QUALIFICATIONS - Experience …
that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global … experiences, don’t let it stop you from applying. Why AWS Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. … degree in Computer Science, Engineering, related field, or equivalent experience - Knowledge of the primary AWS services (EC2, ELB, RDS, Route53 & S3) - Experience implementing AWS services in a variety of distributed computing environments PREFERRED QUALIFICATIONS - AWS experience preferred, with proficiency in a wide range of AWS services (e.g., EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation) - AWS Professional level certifications …
hands-on delivery teams to accelerate their adoption of new technologies and practices. Delivery: Engagements may include on-site projects proving the use of AWS services to support new distributed computing solutions that often span private cloud and public cloud services. Engagements may include migration of existing applications and development of new applications using AWS cloud services. Insights … development, and deployment of business software at scale or recent hands-on technology infrastructure, network, compute, storage, and virtualization experience Experience and technical expertise (design and implementation) in cloud computing technologies Japanese or Korean language skills Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation …
hands-on delivery teams to accelerate their adoption of new technologies and practices. Delivery - Engagements may include on-site projects proving the use of AWS services to support new distributed computing solutions that often span private cloud and public cloud services. Engagements may include migration of existing applications and development of new applications using AWS cloud services. Insights … experiences, don't let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. … Ruby, Go, Swift, Java, .Net, C++ or similar object-oriented language experience 5+ years of IT implementation experience PREFERRED QUALIFICATIONS Experience and technical expertise (design and implementation) in cloud computing technologies Experience leading the design, development and deployment of business software at scale or recent hands-on technology infrastructure, network, compute, storage, and virtualization experience Japanese or Korean language …