a related field, with 10+ years of experience in software architecture (including 5+ years in AI/ML systems). Proven track record of designing and implementing large-scale, distributed AI systems. Deep understanding of AI and machine learning (natural language processing, computer vision, predictive modelling). Extensive experience with cloud platforms (AWS, Google Cloud, Azure) and their AI … databases and analysis tools. Knowledge of AI model serving frameworks like TensorFlow Serving or ONNX Runtime. Experience with AI ethics and bias mitigation techniques. Familiarity with GPU acceleration and distributed computing for AI workloads. Why Join warpSpeed: Technical Leadership and Innovation: Lead the architectural vision for a revolutionary Application AI platform. Opportunity to solve … complex technical challenges at the intersection of AI, cloud computing, and productivity tools. Influence the direction of AI applications in daily software tools. Career Growth and Impact: Play a key role in a rapidly growing startup that's redefining productivity in the AI era. Opportunity to shape the future of work and personal productivity for millions of users. Clear …
engineering Required Skills & Qualifications 5+ years of experience in data engineering roles with progressively increasing responsibility Proven experience designing and implementing complex data pipelines at scale Strong knowledge of distributed computing frameworks (Spark, Hadoop ecosystem) Experience with cloud-based data platforms (AWS, Azure, GCP) Proficiency in data orchestration tools (Airflow, Prefect, Dagster, or similar) Solid programming skills in …
scripts. Familiarity with ELT (Extract, Load, Transform) processes is a plus. Big Data Technologies: Familiarity with big data frameworks such as Apache Hadoop and Apache Spark, including experience with distributed computing and data processing. Cloud Platforms: Proficient in using cloud platforms (e.g., AWS, Google Cloud Platform, Microsoft Azure) for data storage, processing, and deployment of data solutions. Data …
PySpark, Python, SQL with at least 5 years of experience • Working experience in the Palantir Foundry platform is a must • Experience designing and implementing data analytics solutions on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred). • Proven track record of understanding and transforming customer requirements into a best-fit design and architecture. • Demonstrated experience in end … Data Science or similar discipline. • and K8S systems is a plus. Person specification: • Knowledge of Insurance Domain or Financial Industry is a strong plus. • Experienced working in multicultural, globally distributed teams. • Self-starter with a positive attitude and a willingness to learn, who can manage their own workload. • Strong analytical and problem-solving skills. • Strong interpersonal and communication skills …
Design, build and optimise machine learning models with a focus on scalability and efficiency in our application domain Transform prototype implementations to robust production-grade implementation of models Explore distributed training architectures and federated learning capacity Create analytics environments and resources in the cloud or on-premise, spanning data engineering and science Identify the best libraries, frameworks and tools … and best practices (e.g., versioning, testing, CI/CD, API design, MLOps) Building machine learning models and pipelines in Python, using common libraries and frameworks (e.g., PyTorch, MLFlow, JAX) Distributed computing frameworks (e.g., Spark, Dask) Cloud platforms (e.g., AWS, Azure, GCP) and high-performance computing Containerization and orchestration (Docker, Kubernetes) Ability to scope and effectively deliver projects Strong …
City of London, London, United Kingdom Hybrid / WFH Options
un:hurd music
consistent data from external APIs and ensuring seamless incorporation into existing systems. Big Data Management and Storage: Utilize PySpark for scalable processing of large datasets, implementing best practices for distributed computing. Optimize data storage and querying within a data lake environment to enhance accessibility and performance. ML R&D: Collaborate on model prototyping and development, identifying the most relevant … 3+ years of experience in applying machine learning in a commercial setting, with a track record of delivering impactful results. Extensive programming skills in Python, with a specialization in distributed computing libraries such as PySpark. Extensive experience with PyTorch (preferred) and/or TensorFlow. Hands-on experience with deploying machine learning models in production using cloud platforms, especially …
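The "scalable processing of large datasets" pattern these listings keep describing (partition the data, transform each partition independently, then combine the results) can be sketched in miniature with Python's standard library. This is a hypothetical single-machine stand-in for what PySpark does across a cluster, not the PySpark API itself; the record fields and cleaning rule are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def clean_record(record):
    # Hypothetical per-record transformation: clamp a negative count to zero
    return {"id": record["id"], "plays": max(0, record["plays"])}

def process_partition(partition):
    # Each partition is transformed independently, as Spark does per executor;
    # malformed rows (missing "id") are dropped
    return [clean_record(r) for r in partition if "id" in r]

def run_pipeline(records, n_partitions=4):
    # "Partition" the dataset, process partitions concurrently, then combine
    partitions = [records[i::n_partitions] for i in range(n_partitions)]
    with ThreadPoolExecutor(max_workers=n_partitions) as pool:
        results = pool.map(process_partition, partitions)
    return [row for part in results for row in part]
```

In a real data-lake pipeline each partition would live on a different executor and the combine step would be a shuffle or write, but the partition/transform/combine shape is the same.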
South East London, England, United Kingdom Hybrid / WFH Options
un:hurd music
consistent data from external APIs and ensuring seamless incorporation into existing systems. Big Data Management and Storage: Utilize PySpark for scalable processing of large datasets, implementing best practices for distributed computing. Optimize data storage and querying within a data lake environment to enhance accessibility and performance. ML R&D: Collaborate on model prototyping and development, identifying the most relevant … 3+ years of experience in applying machine learning in a commercial setting, with a track record of delivering impactful results. Extensive programming skills in Python, with a specialization in distributed computing libraries such as PySpark. Extensive experience with PyTorch (preferred) and/or TensorFlow. Hands-on experience with deploying machine learning models in production using cloud platforms, especially …
data solutions. Experience with cloud-based solutions and agile methodologies like Scrum or Kanban. Nice to have skills: Experience in retail or e-commerce. Knowledge of Big Data and Distributed Computing. Familiarity with streaming technologies like Spark Structured Streaming or Apache Flink. Additional programming skills in PowerShell or Bash. Understanding of Databricks Ecosystem components. Experience with Data Observability or …
London, England, United Kingdom Hybrid / WFH Options
PhysicsX Ltd
problems. Design, build and optimise machine learning models with a focus on scalability and efficiency in our application domain. Transform prototype model implementations to robust and optimised implementations. Implement distributed training architectures (e.g., data parallelism, parameter server, etc.) for multi-node/multi-GPU training and explore federated learning capacity using cloud (e.g., AWS, Azure, GCP) and on-premise … or PhD in computer science, machine learning, applied statistics, mathematics, physics, engineering, software engineering, or a related field, with a record of experience in any of the following: Scientific computing; High-performance computing (CPU/GPU clusters); Parallelised/distributed training for large/foundation models. Ideally >1 year of experience in a data-driven role, with … exposure to: scaling and optimising ML models, training and serving foundation models at scale (federated learning a bonus); distributed computing frameworks (e.g., Spark, Dask) and high-performance computing frameworks (MPI, OpenMP, CUDA, Triton); cloud computing (on hyper-scaler platforms, e.g., AWS, Azure, GCP); building machine learning models and pipelines in Python, using common libraries and frameworks …
London, England, United Kingdom Hybrid / WFH Options
Merantix
with a talented team to build and deploy scalable data pipelines to aggregate, prepare, and process data for use with machine learning. Your skills span across data processing and distributed systems with a software engineering base. You are excited to collaborate with ML engineers to build generative AI features in Autodesk products. You will report to Senior Manager, Autodesk … to work remotely, in an office, or a mix of both. Responsibilities Collaborate on engineering projects for product with a diverse, global team of researchers and engineers Develop scalable distributed systems to process, filter, and deploy datasets for use with machine learning Process large, unstructured, multi-modal (text, images, 3D models, code snippets, metadata) data sources into formats suitable … such as AWS, Azure, and GCP Containerization technologies, such as Docker and Kubernetes Documenting code, architectures, and experiments Linux systems and bash terminals Preferred Qualifications Hands-on experience with: Distributed computing frameworks, such as Ray Data and Spark. Databases and/or data warehousing technologies, such as Apache Hive. Data transformation via SQL and DBT. Orchestration platforms, such …
Quant and Front Office technology teams to integrate pricing models and workflow enhancements within the ACE application. There will be exposure to a wide range of technological frameworks, including distributed computing architecture. The role will involve tasks such as: Developing and maintaining the Counterparty Credit Risk applications, leveraging in-house Python and C++ model libraries. Supporting and improving …
for data augmentation, denoising, and domain adaptation to enhance model performance. 3. Model Training and Optimization: -Design and implement efficient training pipelines for large-scale generative AI models. -Leverage distributed computing resources, such as GPUs and cloud platforms, for efficient model training. -Optimize model architectures, hyperparameters, and training strategies to achieve superior performance and generalization. 4. Model Evaluation … experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. … BERT, or T5. - Familiarity with reinforcement learning techniques and their applications in generative AI. - Understanding of ethical AI principles, bias mitigation techniques, and responsible AI practices. - Experience with cloud computing platforms (e.g., AWS, GCP, Azure) and distributed computing frameworks (e.g., Apache Spark, Dask). - Strong problem-solving, analytical, and critical thinking skills. - Strong communication, collaboration, and leadership …
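At its core, the "training pipelines" and "hyperparameter" work described above reduces to a loop of forward pass, loss gradient, and parameter update. A deliberately tiny pure-Python sketch of that loop, fitting a one-parameter model y = w * x by gradient descent (toy model and invented data for illustration; real pipelines would use PyTorch or JAX on GPUs):

```python
def train(xs, ys, lr=0.05, epochs=200):
    # Fit y = w * x by full-batch gradient descent on mean squared error.
    # `lr` and `epochs` are the hyperparameters a real pipeline would tune.
    w = 0.0
    for _ in range(epochs):
        # d/dw of mean((w*x - y)^2) over the batch
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # parameter update step
    return w
```

Distributed training changes where the gradient is computed (each worker on its data shard, then averaged), but not the shape of this loop.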
London, England, United Kingdom Hybrid / WFH Options
PhysicsX
concepts and best practices (e.g., versioning, testing, CI/CD, API design, MLOps) Building machine learning models and pipelines in Python, using common libraries and frameworks (e.g., TensorFlow, MLFlow) Distributed computing frameworks (e.g., Spark, Dask) Cloud platforms (e.g., AWS, Azure, GCP) and high-performance computing Containerization and orchestration (Docker, Kubernetes) Strong problem-solving skills and the ability to …
London, England, United Kingdom Hybrid / WFH Options
Elanco Tiergesundheit AG
as operational engineering teams. Daily/Monthly Responsibilities: Build and run responsibilities for GenAI, ensuring robust support folding into standard incident processes as the products mature. Help work with distributed teams across the business on how to consume AI/ML capabilities. Hands-on: code, build, govern, and maintain. Working as part of a scrum team, deliver high quality …/GCP Cloud Data Fusion, Microsoft Azure Machine Learning or GCP Cloud ML Engine, Azure Data Lake, Azure Databricks or GCP Cloud Dataproc. Familiarity with big data technologies and distributed computing frameworks, such as Hadoop, Spark, or Apache Flink. Experience scaling an “API-Ecosystem”, designing, and implementing “API-First” integration patterns. Experience working with authentication and authorization protocols …
IaC) concepts, and cloud security best practices Proficiency in GCP services, such as Compute Engine, Cloud Storage, BigQuery, Dataflow, and Kubernetes Engine Experience working with large-scale data processing, distributed computing, or cloud-based platforms (e.g., Hadoop, Spark, AWS, Azure) is highly desirable Familiarity with ESG investing principles and market trends is a plus Excellent problem-solving, analytical …
language (Python, Java, or Scala) Experience with cloud platforms (AWS, GCP, or Azure) Experience with data warehousing and lake architectures ETL/ELT pipeline development SQL and NoSQL databases Distributed computing frameworks (Spark, Kinesis, etc.) Software development best practices including CI/CD, TDD, and version control Strong understanding of data modelling and system architecture Excellent problem-solving …
at least one programming language (Python, Java, or Scala) Extensive experience with cloud platforms (AWS, GCP, or Azure) Experience with: Data warehousing and lake architectures SQL and NoSQL databases Distributed computing frameworks (Spark, Kinesis, etc.) Software development best practices including CI/CD, TDD, and version control Strong understanding of data modelling and system architecture Excellent problem-solving …
Optimise market data pipelines and trade execution engines to improve performance and reduce latency. Ensure system reliability, scalability, and low-latency performance in a fast-paced trading environment. Utilise distributed computing and high-performance computing techniques to enhance algorithmic execution. Integrate with exchange APIs (REST/WebSocket/FIX) for real-time data processing and trading execution. … Required Qualifications: Strong understanding of quant trading logic, market structure, and execution strategies. Proficiency in C++ and Python, with experience in high-performance computing, multi-threading, and distributed systems. Experience with algorithmic trading systems in crypto, equities, FX, or derivatives (at least 5 years). Knowledge of financial markets, risk management, and portfolio optimisation. Solid understanding of data structures … Bachelor’s, Master’s, or PhD in Computer Science, Mathematics, Engineering, or related fields. Preferred Qualifications: Experience with low-latency trading systems and high-frequency trading (HFT). Background in distributed computing, machine learning, or AI-driven trading models. Familiarity with cloud computing, Kubernetes, or containerised environments. Strong debugging, profiling, and performance optimisation skills. What We Offer: Competitive …
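Execution strategies of the kind this listing references often split a parent order into timed child slices (e.g., a TWAP schedule) to reduce market impact. A minimal illustrative sketch of the slicing step; the function name and integer-quantity assumption are hypothetical, not any firm's actual engine:

```python
def slice_order(total_qty, n_slices):
    # Split a parent order into near-equal child slices, preserving the total
    base, remainder = divmod(total_qty, n_slices)
    # The first `remainder` slices carry one extra unit so quantities sum exactly
    return [base + (1 if i < remainder else 0) for i in range(n_slices)]
```

A production engine would attach each slice to a send time and route it via the exchange API; the arithmetic invariant (child quantities sum to the parent quantity) is the part shown here.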
application platform development, web and mobile development, cloud, integration, security, etc. " Application dev experience with at least one of the cloud providers - Amazon AWS or MS Azure " Understanding of distributed computing paradigms and exposure to building highly scalable systems. " Experience with platform modernization and cloud migration projects " Expertise in Agile development methodologies like TDD, BDD, Performance/Load …
City of London, England, United Kingdom Hybrid / WFH Options
McGregor Boyall
Spark, PySpark, TensorFlow. Strong knowledge of LLM algorithms and training techniques. Experience deploying models in production environments. Nice to Have: Experience in GenAI/LLMs Familiarity with distributed computing tools (Hadoop, Hive, Spark). Background in banking, risk management, or capital markets. Why Join? This is a unique opportunity to work at the forefront of …
re comfortable developing or learning to develop custom metrics, identify biases, and quantify data quality. Strong Python skills for Data & Machine Learning, familiarity with PyTorch and TensorFlow. Experience with distributed computing and big data - scaling ML pipelines for large datasets. Familiarity with cloud-based deployment (such as AWS, GCP, Azure, or Modal). Experience in fast-moving AI, ML …
London, England, United Kingdom Hybrid / WFH Options
bigspark
join our team on a permanent basis in a UK remote, work from home capacity. We provide the backbone for modern analytics to our clients through expertise in DevOps, distributed computing, machine learning and adoption of proven open source projects. We specialise in backend development, infrastructure automation and performance engineering for data workloads at scale. Role Purpose The …
proactive, engaging approach to working with others and building lasting partnerships. Big Data Technologies: Familiarity with tools such as Kafka, Flink, dbt, and Airflow, with a deep understanding of distributed computing and large-scale data processing systems. Nice to Have: Kubernetes Expertise: Experience with Kubernetes, Helm, ArgoCD, and related technologies. Cloud Platform Proficiency: Familiarity with AWS, GCP, or …