London, England, United Kingdom Hybrid / WFH Options
PhysicsX
problems. Design, build and optimise machine learning models with a focus on scalability and efficiency in our application domain. Transform prototype model implementations into robust, optimised implementations. Implement distributed training architectures (e.g., data parallelism, parameter server) for multi-node/multi-GPU training and explore federated learning capacity using cloud (e.g., AWS, Azure, GCP) and on-premise … or PhD in computer science, machine learning, applied statistics, mathematics, physics, engineering, software engineering, or a related field, with a record of experience in any of the following: scientific computing; high-performance computing (CPU/GPU clusters); parallelised/distributed training for large/foundation models. Ideally >1 year of experience in a data-driven role, with … exposure to: scaling and optimising ML models; training and serving foundation models at scale (federated learning a bonus); distributed computing frameworks (e.g., Spark, Dask) and high-performance computing frameworks (MPI, OpenMP, CUDA, Triton); cloud computing (on hyperscaler platforms, e.g., AWS, Azure, GCP); building machine learning models and pipelines in Python, using common libraries and frameworks …
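The data-parallel training pattern mentioned above can be sketched in a few lines. This is an illustrative toy in plain Python, not from the listing: each simulated worker computes a gradient on its own data shard, the gradients are averaged (the all-reduce step), and the shared weights are updated once. A multi-GPU framework such as PyTorch DDP implements the same structure at scale.

```python
# Toy sketch of synchronous data-parallel SGD for a 1-D linear model y = w * x.
# In real multi-node training, each shard lives on its own device and the
# gradient average is an all-reduce; here both are simulated in-process.

def gradient(w, shard):
    # Gradient of mean squared error over one worker's shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, shards, lr=0.05):
    # Each "worker" computes its local gradient; average, then update once.
    grads = [gradient(w, shard) for shard in shards]
    avg = sum(grads) / len(grads)
    return w - lr * avg

# Toy data for y = 3x, split across two simulated workers.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, shards)
print(round(w, 2))  # converges toward 3.0
```

Because gradients are averaged before the single shared update, this is mathematically equivalent to one large-batch step, which is why data parallelism preserves the training dynamics of the single-device algorithm.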
London, England, United Kingdom Hybrid / WFH Options
Autodesk
with a talented team to build and deploy scalable data pipelines to aggregate, prepare, and process data for use with machine learning. Your skills span across data processing and distributed systems with a software engineering base. You are excited to collaborate with ML engineers to build generative AI features in Autodesk products. You will report to Senior Manager, Autodesk … to work remotely, in an office, or a mix of both. Responsibilities Collaborate on engineering projects for product with a diverse, global team of researchers and engineers Develop scalable distributed systems to process, filter, and deploy datasets for use with machine learning Process large, unstructured, multi-modal (text, images, 3D models, code snippets, metadata) data sources into formats suitable … such as AWS, Azure, and GCP Containerization technologies, such as Docker and Kubernetes Documenting code, architectures, and experiments Linux systems and bash terminals Preferred Qualifications Hands-on experience with: Distributed computing frameworks, such as Ray Data and Spark. Databases and/or data warehousing technologies, such as Apache Hive. Data transformation via SQL and dbt. Orchestration platforms, such …
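The "process multi-modal data sources into ML-ready formats" responsibility boils down to filtering and normalising heterogeneous records into a uniform schema. A minimal stdlib-only sketch, where the field names `kind` and `payload` are hypothetical and not from any Autodesk system:

```python
# Hedged illustration: normalise mixed-modality records into JSONL-style
# training examples, dropping records with missing payloads.
import json

RAW = [
    {"kind": "text", "payload": "  Extrude the base plate  "},
    {"kind": "image", "payload": None},            # missing data: dropped
    {"kind": "code", "payload": "sketch.extrude(5)"},
]

def prepare(records):
    # Keep records with usable payloads; trim whitespace; tag with modality.
    for rec in records:
        if not rec.get("payload"):
            continue
        yield {"modality": rec["kind"], "content": rec["payload"].strip()}

lines = [json.dumps(r) for r in prepare(RAW)]
print(len(lines))  # 2 records survive filtering
```

At scale, the same filter/map structure is what a Ray Data or Spark pipeline distributes across workers; the per-record logic stays this simple.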
London, England, United Kingdom Hybrid / WFH Options
Autodesk
and platform engineers to build and deploy scalable data pipelines to aggregate, prepare, and process data for use with machine learning. Your skills span across data processing and distributed systems with a software engineering base. You are excited to collaborate with ML engineers to build generative AI features in Autodesk products, and comfortable working at the intersection of … to work remotely, in an office, or a mix of both. Responsibilities Collaborate on engineering projects for product with a diverse, global team of researchers and engineers Develop scalable distributed systems to process, filter, and deploy datasets for use with machine learning Process large, unstructured, multi-modal (text, images, 3D models, code snippets, metadata) data sources into formats suitable … have experience in data modelling, architecture, and processing skills with varied unstructured data representations Processing unstructured data, such as 3D geometric data Large scale, data-intensive systems in production Distributed computing frameworks, such as Spark, Dask, Ray Data, etc. Cloud platforms such as AWS, Azure, or GCP Docker Documenting code, architectures, and experiments Linux systems and bash terminals …
Bath, England, United Kingdom Hybrid / WFH Options
Autodesk
engineers, and platform engineers to build and deploy scalable data pipelines to aggregate, prepare, and process data for use with machine learning. Your skills span across data processing and distributed systems with a software engineering base. You are excited to collaborate with ML engineers to build generative AI features in Autodesk products, and comfortable working at the intersection of … to work remotely, in an office, or a mix of both. Responsibilities · Collaborate on engineering projects for product with a diverse, global team of researchers and engineers · Develop scalable distributed systems to process, filter, and deploy datasets for use with machine learning · Process large, unstructured, multi-modal (text, images, 3D models, code snippets, metadata) data sources into formats suitable … have experience in data modelling, architecture, and processing skills with varied unstructured data representations · Processing unstructured data, such as 3D geometric data · Large scale, data-intensive systems in production · Distributed computing frameworks, such as Spark, Dask, Ray Data, etc. · Cloud platforms such as AWS, Azure, or GCP · Docker · Documenting code, architectures, and experiments · Linux systems and bash terminals …
London, England, United Kingdom Hybrid / WFH Options
Cloudbeds
Fast 500 again in 2024 - but we're just getting started. How You'll Make an Impact: As a Senior Data Engineer, you'll design and implement large-scale distributed data processing systems using technologies like Apache Hadoop, Spark, and Flink. You'll build robust data pipelines and infrastructure that transform complex data into actionable insights, ensuring scalability and … platform that processes billions in bookings annually. You'll architect data lakes, warehouses, and real-time streaming platforms while implementing security measures and optimizing performance. With your expertise in distributed computing, containerization (Docker, Kubernetes), and streaming technologies (Kafka, Confluent), you'll drive innovation and evaluate new technologies to continuously improve our data ecosystem. Our Data team: We're … expected, and collective wins matter more than individual credit. What You Bring to the Team: Technical Expertise & Scalability Mindset: Deep knowledge of data architecture, ETL/ELT pipelines, and distributed systems, with the ability to design scalable, high-performance solutions. Problem-Solving & Ownership: A proactive approach to diagnosing issues, improving infrastructure, and taking full ownership from concept to production. …
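The real-time streaming work described above typically centres on windowed aggregations over event streams. Here is a minimal sketch in pure Python (no Kafka or Flink) of a tumbling-window sum over booking events; the event fields and the 60-second window width are illustrative assumptions, not Cloudbeds specifics:

```python
# Tumbling-window aggregation: bucket events by fixed, non-overlapping time
# windows and sum the booking amounts in each. A stream processor such as
# Flink or Kafka Streams performs this same computation incrementally.
from collections import defaultdict

events = [  # (epoch_seconds, booking_amount)
    (0, 120.0), (30, 80.0), (65, 200.0), (110, 50.0), (125, 75.0),
]

def tumbling_window_sum(events, width=60):
    # Each event falls into exactly one window, keyed by its window start.
    windows = defaultdict(float)
    for ts, amount in events:
        windows[(ts // width) * width] += amount
    return dict(windows)

print(tumbling_window_sum(events))  # {0: 200.0, 60: 250.0, 120: 75.0}
```

The key design point is that each event belongs to exactly one window, so the aggregation is embarrassingly parallel by key, which is what lets streaming platforms scale it horizontally.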
London, England, United Kingdom Hybrid / WFH Options
PhysicsX
concepts and best practices (e.g., versioning, testing, CI/CD, API design, MLOps) Building machine learning models and pipelines in Python, using common libraries and frameworks (e.g., TensorFlow, MLflow) Distributed computing frameworks (e.g., Spark, Dask) Cloud platforms (e.g., AWS, Azure, GCP) and high-performance computing (HPC) Containerization and orchestration (Docker, Kubernetes) Strong problem-solving skills and the ability to …
London, England, United Kingdom Hybrid / WFH Options
Obinex
CEX), including the matching engine, order execution, and real-time APIs. Design and optimize a scalable blockchain architecture, focusing on consensus mechanisms, cryptographic security, and decentralized networking. Leverage cloud computing solutions (AWS, GCP, or Azure) to ensure high availability, security, and performance. Optimize the exchange and blockchain infrastructure to handle thousands of transactions per second with low-latency execution. … technology roadmap. Write high-performance, scalable, and secure code in Rust, Go, C++, Java, or Python. What will you bring? Master's in Computer Science, with a specialization in Cloud Computing, Blockchain, or Distributed Systems. 5+ years of hands-on experience in software engineering, ideally in fintech, blockchain, or high-performance trading systems. Deep expertise in blockchain architecture, including … consensus algorithms (PoW, PoS, BFT), cryptography, and smart contracts. Strong background in cloud computing architectures (AWS, GCP, Azure) and distributed computing frameworks (Kubernetes, Kafka, etc.). Experience in high-frequency trading (HFT) systems, low-latency networking, and distributed database optimization. Security-first mindset, with knowledge of HSM, MPC, multi-signature wallets, and advanced cryptographic protocols. Proficiency …
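A matching engine of the kind described is, at its core, a price-time priority order book. The toy sketch below (pure Python `heapq`, no networking, persistence, or risk checks, so nothing like a production CEX) shows the central crossing loop:

```python
# Minimal price-time priority matching sketch. Bids are a max-heap (price
# negated), asks a min-heap; a sequence number breaks ties so earlier orders
# at the same price fill first.
import heapq

class Book:
    def __init__(self):
        self.bids = []    # entries: (-price, seq, qty)
        self.asks = []    # entries: (price, seq, qty)
        self.seq = 0
        self.trades = []  # (price, qty) fills, in order

    def submit(self, side, price, qty):
        self.seq += 1
        same, opp = (self.bids, self.asks) if side == "buy" else (self.asks, self.bids)
        # Cross against the opposite side while prices overlap.
        while qty and opp:
            best = opp[0]
            best_price = best[0] if side == "buy" else -best[0]
            crosses = price >= best_price if side == "buy" else price <= best_price
            if not crosses:
                break
            fill = min(qty, best[2])
            self.trades.append((best_price, fill))
            qty -= fill
            if best[2] == fill:
                heapq.heappop(opp)
            else:
                # Reduce resting qty in place; heap key is unchanged, so the
                # heap invariant is preserved.
                opp[0] = (best[0], best[1], best[2] - fill)
        if qty:  # remainder rests on the book
            key = -price if side == "buy" else price
            heapq.heappush(same, (key, self.seq, qty))

book = Book()
book.submit("sell", 101.0, 5)
book.submit("sell", 100.0, 5)
book.submit("buy", 101.0, 7)   # fills 5 @ 100, then 2 @ 101
print(book.trades)  # [(100.0, 5), (101.0, 2)]
```

Real engines add cancel/replace, self-trade prevention, and lock-free or single-writer designs for latency, but the crossing logic above is the invariant they all maintain.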
East London, London, United Kingdom Hybrid / WFH Options
McGregor Boyall Associates Limited
Spark, PySpark, TensorFlow. Strong knowledge of LLM algorithms and training techniques. Experience deploying models in production environments. Nice to Have: Experience in GenAI/LLMs Familiarity with distributed computing tools (Hadoop, Hive, Spark). Background in banking, risk management, or capital markets. Why Join? This is a unique opportunity to work at the forefront of …
City of London, London, United Kingdom Hybrid / WFH Options
CONQUER IT
Google AI Platform) o Expertise in data pre-processing, feature engineering, and model evaluation o Understanding of software engineering principles (version control, CI/CD, containerization) o Familiarity with distributed computing and big data tools (Spark, Hadoop) o Ability to optimize models for performance and scalability o Experience with Azure AI Search …
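The pre-processing and feature-engineering skills listed above cover transformations like scaling numeric features and encoding categorical ones. A hedged, stdlib-only illustration (real pipelines would use scikit-learn or similar):

```python
# Two staple feature-engineering transforms in plain Python:
# min-max scaling maps a numeric feature onto [0, 1]; one-hot encoding
# turns a categorical feature into indicator columns.

def min_max_scale(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def one_hot(values):
    cats = sorted(set(values))   # stable column order: sorted categories
    return [[1 if v == c else 0 for c in cats] for v in values]

ages = [20, 30, 40]
scaled = min_max_scale(ages)
encoded = one_hot(["red", "blue", "red"])
print(scaled)    # [0.0, 0.5, 1.0]
print(encoded)   # columns are (blue, red)
```

Both transforms must be fitted on training data only and then applied unchanged to evaluation data, which is the leakage-avoidance discipline the "model evaluation" bullet implies.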
London, England, United Kingdom Hybrid / WFH Options
InstaDeep Ltd
AI revolution! About DeepPCB: DeepPCB is InstaDeep’s AI-powered Place & Route PCB (Printed Circuit Board) design tool. We use a combination of deep reinforcement learning and high-performance computing to automate and scale PCB place-and-route workflows, accelerating hardware innovation globally. We are looking for a Machine Learning Engineer to join the DeepPCB team and help push … engineers to bring ideas to life. Responsibilities: Develop scalable and efficient machine learning algorithms to tackle PCB place-and-route challenges. Adapt and optimize ML models for large-scale distributed computing environments (e.g., GPUs, multi-node clusters). Build, test, and deploy robust production-level ML systems integrated into the DeepPCB platform. Collaborate with research scientists, software engineers … thrive in a fast-paced, collaborative, and dynamic environment. Nice to haves: Prior experience with PCB design, EDA tools, or related optimization problems. Hands-on experience in high-performance computing environments (e.g., Kubernetes, Ray, Dask). Contributions to open-source projects, publications, or top placements in ML competitions (e.g., Kaggle). Expertise in related fields such as Computer Vision …
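DeepPCB itself uses deep reinforcement learning; as a hedged illustration of the underlying placement optimisation problem only (not InstaDeep's method), here is a classical hill-climbing baseline that swaps component positions to reduce total Manhattan wirelength. The board, nets, and slot grid are invented toy data:

```python
# Toy placement problem: assign 4 components to 4 board slots so that the
# total Manhattan wirelength of the nets is minimised. Hill climbing accepts
# any swap that does not increase wirelength.
import random

SLOTS = [(0, 0), (0, 1), (1, 0), (1, 1)]     # candidate board positions
NETS = [("A", "B"), ("B", "C"), ("C", "D")]  # component pairs to be wired

def wirelength(place):
    return sum(abs(place[a][0] - place[b][0]) + abs(place[a][1] - place[b][1])
               for a, b in NETS)

def hill_climb(steps=300, seed=0):
    rng = random.Random(seed)
    comps = ["A", "B", "C", "D"]
    place = dict(zip(comps, SLOTS))          # arbitrary initial placement
    cost = wirelength(place)
    for _ in range(steps):
        a, b = rng.sample(comps, 2)
        place[a], place[b] = place[b], place[a]      # propose a swap
        new = wirelength(place)
        if new <= cost:
            cost = new                               # accept non-worsening move
        else:
            place[a], place[b] = place[b], place[a]  # revert worsening swap
    return cost

final = hill_climb()
print(final)
```

Real place-and-route adds component footprints, routing congestion, and design rules, and the search space grows factorially, which is exactly why learned policies and HPC-scale search are interesting here.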
West Midlands, England, United Kingdom Hybrid / WFH Options
Amtis - Digital, Technology, Transformation
.NET, and advanced SQL (T-SQL). Experience with CI/CD pipelines using Azure DevOps and infrastructure as code (Terraform, Bicep, ARM). Solid understanding of data engineering, distributed computing, and cloud-native design. Experience in data modelling, metadata management, and data quality practices. Strong communication and stakeholder engagement skills. Self-starter with a proactive mindset and …
London, England, United Kingdom Hybrid / WFH Options
Goldman Sachs Bank AG
skills required to triage and resolve complex production issues and operate well in a fast-paced, high-pressure environment. A propensity to automate manual tasks, appreciation for large-scale, distributed computing systems, and a willingness to develop using a wide range of languages and frameworks will be necessary to succeed in the role. As part of a global … to quickly identify scope and impact of issues during high-pressure situations Solid communication and interpersonal skills Ability to multi-task and prioritize tasks effectively Preferred Qualifications Experience with distributed systems design, maintenance, and troubleshooting. Hands-on experience with debugging and optimizing code, as well as automation. Knowledge of financial markets FIX protocol knowledge ABOUT GOLDMAN SACHS At Goldman …