Pandas, NumPy, Scikit-Learn, TensorFlow, PyTorch). Solid understanding of ML and data pipeline architectures and best practices. Experience with big data technologies and distributed computing (e.g., Spark, Hadoop) is a plus. Proficient in SQL and experienced with relational databases. Strong analytical and problem-solving skills, with a more »
mobile development, cloud, integration, security, etc. " Application dev experience with at least one of the cloud providers - Amazon AWS or MS Azure " Understanding of the distributed computing paradigm and exposure to building highly scalable systems. " Experience with platform modernization and cloud migration projects " Expertise in Agile development methodologies like more »
record leading complex ETL and Data Infrastructure projects, as well as designing and building data intensive applications and services. · Experience with data processing and distributed computing frameworks such as Apache Spark · Expert knowledge in one or more of the following languages - Python, Scala, Java, Kotlin · Deep knowledge of more »
London, England, United Kingdom Hybrid / WFH Options
Client Server
FX You have a strong knowledge of Linux OS/Systems Administration You have a good understanding of computer architecture, databases, real-time systems, distributed computing, OOP, Data Structures, Design Patterns, Algorithms You're collaborative with excellent English language communication skills You are degree educated, likely to MSc more »
record leading complex ETL and Data Infrastructure projects, as well as designing and building data intensive applications and services. Experience with data processing and distributed computing frameworks such as Apache Spark Expert knowledge in one or more of the following languages - Python, Scala, Java, Kotlin Deep knowledge of more »
inference of these models. You will develop the core inference engine to seamlessly deploy large machine learning models to customers at scale and across distributed systems, contributing significantly to the automated pipeline, optimizing for high-throughput training runs and rapid experimentation while achieving top hardware efficiency. Please note that … professional experience in a similar capacity. Qualifications: We are seeking candidates with exceptional ML engineering evidenced by: Experience in creating and managing high-performance computing clusters across GPU/TPU, preferably in PyTorch. Proficiency in efficient serving of large machine learning models at scale, including quantization and distributed computing, leveraging libraries such as DeepSpeed. Strong software engineering acumen with expertise in software design/architecture, particularly in Python. Understanding of the latest AI research and the ability to efficiently implement these systems. Prior experience at a leading machine learning company (OpenAI, DeepMind, Meta, Anthropic, HuggingFace, etc.). more »
responsibility from Day 1. WHAT YOU’LL DO: Be an ambitious builder, working up and down the stack, mixing software engineering, data engineering, and distributed systems knowledge to build modern enterprise payment applications. Build reliable, high-throughput, low-latency microservices to power flawless cross-border transactions. Participate in the … solutions. Mentor and support the growth of junior engineers. WHAT YOU'LL BRING: 6+ years of hands-on Software Development experience on large-scale distributed systems, with the last 4-6+ years in Java or similar (Golang, Scala, etc). Experience in building transactional systems backed by modern … persistence technologies (Aurora, DynamoDB, etc.) Experience with Agile development of distributed services, with a focus on robust software design, scalability, and security. Experience building and deploying containerized applications into modern distributed computing environments (Kubernetes, Nomad, etc.) Eagerness to work openly and collaboratively with a diverse team Ability more »
experience in a similar capacity. Responsibilities: Develop the core inference engine used to serve large machine learning models to customers at scale and across distributed systems. Contribute significantly to the internal automated pipeline enabling high-throughput training runs and rapid experimentation while achieving top hardware efficiency. Collaborate in defining … of excellence that propels us forward. Qualifications: We are seeking candidates with exceptional ML engineering evidenced by: Experience in creating and managing high-performance computing clusters across GPU/TPU, preferably in PyTorch. Proficiency in efficient serving of large machine learning models at scale, including quantization and distributed computing, leveraging libraries such as DeepSpeed. Strong software engineering acumen with expertise in software design/architecture, particularly in Python. Understanding of the latest AI research and the ability to efficiently implement these systems. Prior experience at a leading machine learning company (OpenAI, DeepMind, Meta, Anthropic, HuggingFace, etc.). more »
years Python experience required. Previously developed coding standards and extensive experience in CI/CD pipelines. Experience of developing multi-component architectures. Knowledge of distributed computing and serialisation techniques. DESIRABLE REQUIREMENTS Background in stochastic processes, probability and numerical analysis. Physics, Engineering or similar subjects is desirable, but not more »
components for both live trading and simulation Developing a seamless platform to handle all aspects of quant trading—model building, optimization, and trade execution Distributed computing Maintaining and updating the platform, ensuring its stability, robustness, and security Troubleshooting and resolving any systems-related issues and handling the release more »
training across GPUs/TPUs, preferably using PyTorch and Kubernetes. Proficiency in efficient serving of large machine learning models at scale, including quantisation and distributed computing. Other: Experience at a leading machine learning company. Interested in the impacts of AI technology. A degree in Computer Science. Can demonstrate you more »
training across GPUs/TPUs, preferably using PyTorch and Kubernetes. Proficiency in efficient serving of large machine learning models at scale, including quantisation and distributed computing. General: Keen interest in AI and AI safety Can demonstrate you can work in a fast-growing early stage start-up A degree more »
Greater London, England, United Kingdom Hybrid / WFH Options
Anson McCade
/17/20) and developing low latency code. Bachelor's degree in a Quantitative Field; maths, statistics, computer science, etc. Experience with Distributed Computing, Platform Development, Networking, and System Design. Exceptional analytical and quantitative skills BENEFITS Competitive base salary between £140,000 and £200,000. Bonus based more »
of trading, ideally in a buy side environment. Understanding of Low Latency Infrastructure. Understanding of Trade Order Books. Ideally familiar with encryption algorithms, consensus mechanisms, security protocols, distributed computing. The environment is that of Facebook or Google: relaxed and open, with time to think and make the right decisions. The atmosphere is calm and more »
or leading a team. You understand core data & architectural principles, yet always seek to blend and balance real world considerations with pure theoretical approaches. Distributed Systems Experience: You have developed and worked with big data architectures, including delivering event-driven solutions. You are aware of the fallacies of distributed computing and how they impact and relate to complex data platforms and storage solutions. Innovation & Continuous Learning: You enjoy and actively seek to learn about new technologies and techniques in the Data and AI/ML space. You are an advocate for promoting and incorporating best practices around more »
plenty of Python and Google Cloud Platform experience. You’re used to working with large data sets from multiple sources, including simulation, optimisation and distributed computing tools. Ideally you have worked for another D2C or retail business and have some experience developing technology, data strategies and system designs more »
product is essential, as is the ability to measure value delivery in Platform teams. You are familiar with technologies like Trino, Apache Flink or similar. You understand distributed computing principles, and being able to work with large-scale data processing systems is a plus. Familiarity with Cloud Platforms and Services - like Amazon more »
trading and research infrastructure. You can expect to design and maintain the company's largest compute infrastructure, which covers everything from OS and tooling to HPC for research and trading. You'll be working in a distributed computing environment with a primary focus on Linux-based systems. This more »
trading and research infrastructure. This team designs and maintains the firm's largest compute infrastructure, which includes operating system platforms, software development tooling, high-performance computing, networking and storage for research and trading. You’ll have the opportunity to work on a wide variety of technology initiatives in a distributed computing environment with a primary focus on Linux-based systems. This includes workload scheduling design and implementation, fleet management, clustered file system design and operation, software design and life cycle (SDLC), kernel and network performance tuning for low-latency and high-throughput applications, metrics collection and data mining more »
a core focus on increasing developer productivity and overall developer experience. 💡 What You Need: Strong SWE skills - Python or Golang preferred. Expert knowledge of Distributed Computing technologies - Kubernetes strongly preferred. Competent Front-End chops - React & JavaScript preferred - open to similar frameworks/tech. Strong Automation & Config management tooling more »