Swindon, England, United Kingdom Hybrid / WFH Options
Zurich Insurance
developers e.g. use of Git. Experience using SQL and working with databases. Comfortable working with a variety of data sources, both structured and unstructured, and very large datasets using distributed computing (e.g. Spark). Experience working with cloud technology, ideally Microsoft Azure and/or AWS. Proven track record of development and deployment of machine learning algorithms, including …
Manchester, England, United Kingdom Hybrid / WFH Options
ZipRecruiter
to deployment, including model evaluation and ongoing monitoring. Desirable Skills Familiarity with cloud platforms such as Azure (e.g., Azure ML, Data Factory). Experience in big data environments and distributed computing frameworks (e.g., Spark). Knowledge of business intelligence tools and their integration with data science workflows. Prior experience mentoring or leading a team of data scientists. Why …
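Several of these adverts ask for Spark experience in big data environments. Purely as an illustrative aside, not part of any listing, the sketch below shows the kind of distributed aggregation they allude to; the file name and column names are hypothetical, and a local PySpark installation is assumed.

```python
# Illustrative sketch only: a minimal distributed group-by aggregation in
# PySpark. "transactions.csv" and its columns are hypothetical stand-ins.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example").getOrCreate()

# Spark partitions the file and distributes the work across executors.
df = spark.read.csv("transactions.csv", header=True, inferSchema=True)

summary = (
    df.groupBy("customer_id")
      .agg(F.sum("amount").alias("total_spend"),
           F.count("*").alias("n_transactions"))
)
summary.show(5)
spark.stop()
```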
London (City of London), South East England, United Kingdom
In Technology Group
and monitoring in production systems Experience with cloud platforms (GCP, AWS or Azure), especially managed ML services like SageMaker or Vertex AI Proficiency in SQL and working knowledge of distributed computing tools like Spark or Dask Nice to Have: Experience with natural language processing (NLP) e.g., using LLMs, transformers, text classification Familiarity with Graph ML (e.g., for customer …
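The listing above pairs SQL proficiency with "distributed computing tools like Spark or Dask". As a hedged illustration only, here is a minimal sketch of the Dask pattern it names; the file glob and column names are hypothetical, and `dask[dataframe]` is assumed to be installed.

```python
# Illustrative sketch only: a pandas-style workflow parallelised with Dask.
import dask.dataframe as dd

# Dask splits the (hypothetical) CSVs into partitions and builds a lazy
# task graph instead of loading everything into memory.
df = dd.read_csv("events-*.csv")

# Same API shape as pandas; nothing executes until .compute() is called.
counts = df.groupby("event_type")["user_id"].count()
print(counts.compute())
```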
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
Data Intellect
Deep experience with Python, SQL and/or Scala Knowledge of two or more common Cloud ecosystems (Azure, AWS, GCP) with expertise in at least one. Deep experience with distributed computing using Apache Spark Working knowledge of CI/CD for production deployments Working knowledge of MLOps Familiarity with designing and deploying performant end-to-end data architectures Experience …
Swindon, Wiltshire, United Kingdom Hybrid / WFH Options
Zurich 56 Company Ltd
developers e.g. use of Git. Experience using SQL and working with databases. Comfortable working with a variety of data sources, both structured and unstructured, and very large datasets using distributed computing (e.g. Spark). Experience working with cloud technology, ideally Microsoft Azure and/or AWS. Proven track record of development and deployment of machine learning algorithms, including …
Fareham, England, United Kingdom Hybrid / WFH Options
Zurich Insurance
developers e.g. use of Git. Experience using SQL and working with databases. Comfortable working with a variety of data sources, both structured and unstructured, and very large datasets using distributed computing (e.g. Spark). Experience working with cloud technology, ideally Microsoft Azure and/or AWS. Proven track record of development and deployment of machine learning algorithms, including …
language processing (NLP) or computer vision (CV). Knowledge of MLOps and model deployment pipelines. Familiarity with AI ethics and responsible AI principles. Experience working with large datasets and distributed computing. Contributions to open-source AI projects. WE OFFER: The benefits depend upon the location where you apply from, so feel free to check with the recruiter. A flexible …
Prototype and test AI models before full-scale deployment. Skills & Experience Technical Expertise Understanding of transformer architectures and large-scale language models. Experience with data engineering, model optimisation, and distributed computing. Strong programming skills in JavaScript, Python, or other AI-related languages. Strong SQL and data analytics skills. Familiarity with cloud platforms (AWS and Azure) for AI deployment. …
We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences - the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve … training and inference performance on AMD GPUs. Collaborate with Open-Source Maintainers: Engage with framework maintainers to ensure code changes are aligned with requirements and integrated upstream. Work in Distributed Computing Environments: Optimize deep learning performance on both scale-up (multi-GPU) and scale-out (multi-node) systems. Utilize Cutting-Edge Compiler Tech: Leverage advanced compiler technologies to … scaling and throughput. Software Engineering: Proficient skills in Python and C++, with experience in debugging, performance tuning, and test design to ensure high-quality, maintainable software solutions. High-Performance Computing: Proficient experience in running large-scale workloads on heterogeneous compute clusters, optimizing for efficiency and scalability. Compiler Optimization: Solid understanding of compiler theory and tools like LLVM and ROCm …
years. Key job responsibilities Collaborate with experienced cross-disciplinary Amazonians to conceive, design, and bring to market innovative products and services. Design and build innovative technologies in a large distributed computing environment and help lead fundamental changes in the industry. Create solutions to run predictions on distributed systems with exposure to innovative technologies at incredible scale and speed. Build distributed storage, index, and query systems that are scalable, fault-tolerant, low cost, and easy to manage/use. Work in an agile environment to deliver high-quality software. Basic Qualifications Graduated less than 24 months ago or about to complete a Bachelor’s or Master’s Degree in Computer Science, Computer Engineering, or related fields at … Knowledge of Computer Science fundamentals such as object-oriented design, algorithm design, data structures, problem solving and complexity analysis. Preferred Qualifications Previous technical internship(s) if applicable Experience with distributed, multi-tiered systems, algorithms, and relational databases Experience in optimization mathematics such as linear programming and nonlinear optimisation Ability to effectively articulate technical challenges and solutions Adept at handling …
Spark and Scala. Hands-on experience with Hadoop. Proficiency with data processing frameworks like Kafka and Spark. Experience with database engines such as Oracle, PostgreSQL, Teradata, Cassandra. Understanding of distributed computing technologies, approaches, and patterns. Nice to have Experience with Data Lakes, Data Warehousing, or analytics systems. We offer Opportunity to work on cutting-edge projects. Collaboration with …
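For a sense of what proficiency with "data processing frameworks like Kafka" means in practice, a minimal consumer sketch follows. It is illustrative only: the topic name and broker address are hypothetical, and the kafka-python client library is assumed.

```python
# Illustrative sketch only: reading records from a Kafka topic with the
# kafka-python client. "trades" and the broker address are placeholders.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "trades",                          # hypothetical topic
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",      # start from the oldest retained record
)
for message in consumer:
    print(message.offset, message.value)
```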
London (City of London), South East England, United Kingdom
Selby Jennings
Distributed Computing Application Engineer - Systematic Hedge Fund - London Our client, a leading systematic hedge fund, is seeking a Distributed Computing Application Engineer to join their London-based team. This role involves collaborating across multiple business units to architect and optimise large-scale, compute-intensive workflows spanning global locations. You will work with cutting-edge platforms such as Ray and YellowDog, driving the integration and support of distributed computing solutions to enhance performance and scalability in complex environments. Key Responsibilities: Partner with business teams to embed distributed computing into core workflows. Optimise applications for high performance on distributed platforms. Provide architectural and technical leadership in the design and development of distributed systems. Design, implement, and manage distributed computing solutions using Ray and YellowDog. Required Skills & Experience: Bachelor's degree in Computer Science, Engineering, or a related field. Deep understanding of loosely and tightly coupled workloads. Hands-on experience with HPC platforms and job/resource scheduling. Proficiency in cloud platforms such as AWS, Azure, or Google Cloud Platform …
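The Ray workflow this role describes can be sketched in a few lines. This is an illustration only: the task body is a hypothetical stand-in for a compute-intensive job, and a local `pip install ray` is assumed rather than any fund's actual cluster.

```python
# Illustrative sketch only: fanning a compute-intensive function out over
# a Ray cluster. The "pricing" task is a hypothetical placeholder.
import ray

ray.init()  # connects to an existing cluster, or starts a local one

@ray.remote
def price_scenario(seed: int) -> float:
    # Placeholder for an expensive simulation, e.g. a Monte Carlo path.
    import random
    rng = random.Random(seed)
    return sum(rng.gauss(0, 1) for _ in range(100_000))

# Launch tasks in parallel; ray.get blocks until all results return.
futures = [price_scenario.remote(seed) for seed in range(32)]
results = ray.get(futures)
print(len(results), "scenarios priced")
ray.shutdown()
```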
Bristol, England, United Kingdom Hybrid / WFH Options
Axelera AI
commitment to innovation has firmly established us as a global industry pioneer. Are you up for the challenge? Position Overview Axelera AI is looking for an experienced Senior Engineer – Computing Architecture with a strong software engineering and tooling background to support the development of state-of-the-art AI computing systems. In this role, you will work as a software engineer in the Architecture team and be responsible for designing, implementing, and maintaining the foundational tools and methodologies required for developing complex integrated circuits and computing architectures. As part of the Architecture team, you will collaborate with cross-disciplinary teams, ensuring that our design environment is optimized for efficiency, accuracy, and scalability. This is an exciting opportunity … solutions while contributing to breakthrough advancements in AI. Key responsibilities: Tooling Infrastructure: Develop and optimize custom tooling frameworks and workflows to support the architecture, design, and verification of AI computing systems. Automation: Create robust scripts and automation pipelines to streamline IC development, reduce manual effort, and improve design quality. Design Collaboration: Work closely with architecture, hardware, and software teams …
collaborate with talented engineers to build and adopt common tools, platforms, and applications. Our solutions are delivered as software products or hosted services, using technologies such as Java, Cloud computing, HDFS, Spark, S3, ReactJS, and Sybase IQ. Key challenges include acquiring high-quality data, storing it, performing rapid risk computations via distributed computing, and enabling actionable insights … in building reports using SQL and data visualization tools like Tableau Web development skills for risk management UI applications Development experience with databases such as Snowflake, Sybase IQ, and distributed systems like HDFS Ability to interact with business users for issue resolution Design and support batch processes with scheduling infrastructure Leadership in guiding junior team members through SDLC phases …/Ant) Experience with process scheduling platforms like Apache Airflow Willingness to work with proprietary technologies like Slang/SECDB Understanding of compute resources and performance metrics Knowledge of distributed computing frameworks like Dask and cloud processing Experience managing projects through entire SDLC About Goldman Sachs Goldman Sachs is a leading global investment banking, securities, and investment management …
developers and architects who partner with business areas and other technology teams to deliver high-profile projects using a raft of technologies that are fit for purpose (Java, Cloud computing, HDFS, Spark, S3, ReactJS, Sybase IQ among many others). A glimpse of the interesting problems that we engineer solutions for includes acquiring high-quality data, storing it, performing … risk computations in a limited amount of time using distributed computing, and making data available to enable actionable risk insights through analytical and response user interfaces. WHAT WE LOOK FOR Senior Developer in large projects across a global team of developers and risk managers Performance-tune applications to improve memory and CPU utilization. Perform statistical analyses to identify trends … Tableau. Utilize web development technologies to facilitate application development for the front-end UI used for risk management actions Develop software for calculations using databases like Snowflake, Sybase IQ and distributed HDFS systems. Interact with business users for resolving issues with applications. Design and support batch processes using scheduling infrastructure for calculation and distributing data to other systems. Oversee junior …
scaling our operations in London. Our work environment rewards innovation, speed, and bold thinking. The role We’re hiring Senior and Staff Software Engineers to build the high-performance computing infrastructure that powers our Optical Tensor Processing Units (OTPUs). This isn’t just about scaling models—it’s about rethinking how AI workloads are executed at speed and scale. You’ll lead the design and implementation of software systems that run distributed, low-latency inference across clusters. You’ll work closely with hardware and ML teams to optimise every layer of the stack—from model representation and execution to data movement and scheduling. Whether it’s through compiler techniques, systems-level tuning, or custom runtime design, you … large-scale scientific compute, or AI infrastructure at serious scale, we’d love to talk. Responsibilities Design and build high-performance systems for running AI/ML workloads across distributed compute clusters Optimise for ultra-low latency and real-time inference at scale—profiling, tuning, and rewriting critical systems as needed Identify and resolve performance bottlenecks across the stack …
scaling our operations in London. Our work environment rewards innovation, speed, and bold thinking. The role We’re hiring Senior and Staff Software Engineers to build the high-performance computing infrastructure that powers our OTPUs. This role involves rethinking how AI workloads are executed at speed and scale, focusing on designing and implementing software systems for distributed, low-latency inference across clusters. Responsibilities Design and build high-performance systems for running AI/ML workloads across distributed compute clusters Optimize for ultra-low latency and real-time inference at scale—profiling, tuning, and rewriting critical systems as needed Identify and resolve performance bottlenecks across the stack, from model execution and scheduling to hardware-level constraints Collaborate with … AI infrastructure, compute systems, and compiler tooling Skills & Experience 5+ years of experience building performance-critical systems in HPC, HFT, large-scale simulation, or AI infrastructure Deep understanding of distributed systems with a focus on real-time or near real-time data processing Strong programming skills in C++ and Python for performance-sensitive applications Hands-on experience with ML …
challenges. GBI is looking for a Head of Scientific Compute to shape the future of computational research by leading the development of cutting-edge infrastructure to support high-performance computing, AI-driven analytics, and large-scale data processing. Working at the intersection of genomics, synthetic biology, and chemistry, you'll collaborate with researchers and technical teams to design tailored computational solutions, ensuring compliance with data management best practices. You'll recruit, mentor, and lead a multidisciplinary team while optimising computing resources for scalability and efficiency. Key Responsibilities: Lead the development and implementation of a strategic vision for scientific computing, integrating advanced technologies to support large-scale data processing, high-performance computing, and AI-driven analytics in genomic research. Oversee and manage the scientific computing infrastructure, ensuring it meets the evolving needs of computational biology, data analysis, and laboratory automation across diverse research programs. Collaborate closely with researchers and technical teams to design and implement computational solutions tailored to applications in synthetic biology, chemistry, and genomics. Recruit, mentor, and lead a multidisciplinary team, fostering a culture …
scaling our operations in London. Our work environment rewards innovation, speed, and bold thinking. The role We’re hiring Senior and Staff Software Engineers to build the high-performance computing infrastructure that powers our OTPUs. This role involves rethinking how AI workloads are executed at speed and scale, focusing on software systems that run distributed, low-latency inference … candidates have experience in HFT, large-scale scientific compute, or AI infrastructure at a serious scale. Responsibilities Design and build high-performance systems for AI/ML workloads across distributed compute clusters. Optimize for ultra-low latency and real-time inference at scale—profiling, tuning, and rewriting critical systems as needed. Identify and resolve performance bottlenecks across the stack … AI infrastructure, compute systems, and compiler tooling. Skills & Experience 5+ years of experience in performance-critical systems in HPC, HFT, large-scale simulation, or AI infrastructure. Deep understanding of distributed systems, especially real-time or near real-time data processing. Strong programming skills in C++ and Python for performance-sensitive applications. Hands-on experience with ML compilers (LLVM, MLIR …