DW space (map/reduce, columnar DBs, etc.). You should be an expert in data warehouse architectures, data modeling, and data processing using distributed compute solutions. The role will involve leading the core global engineering team of 4-5 engineers based in India to develop industry-leading solutions.
Cambridge, England, United Kingdom Hybrid / WFH Options
Intrasonics, an Ipsos Company
trying to improve its workflow, and we are not afraid of investing in new ideas. If you like to design, develop, and deploy resilient distributed systems, you will fit right in. You will have the chance to work on established projects, kickstart new ones, and make a case for … tools. Experience of cloud-based services (AWS, Google Cloud). Experience of user interface and user experience design. Experience with microservices architecture. Knowledge of serverless computing architectures. Knowledge of CI/CD workflows. Knowledge of issue/project tracking software, e.g. Jira. Knowledge of container and orchestration technologies (e.g. Docker … Kubernetes, LXC). Other technologies associated with multi-tier architectures and distributed computing, e.g. Redis, RabbitMQ, Prometheus, Apache Kafka, ELK, load balancers, ... What is in it for me? Ipsos UK offer an attractive basic salary and a rewards package including 25 days annual leave, a pension scheme and a …
record leading complex ETL and Data Infrastructure projects, as well as designing and building data-intensive applications and services. Experience with data processing and distributed computing frameworks such as Apache Spark. Expert knowledge in one or more of the following languages: Python, Scala, Java, Kotlin. Deep knowledge of …
Birmingham, England, United Kingdom Hybrid / WFH Options
Xpertise Recruitment
version control, CI/CD, and model monitoring. Proficiency in Python and relevant data manipulation and analysis libraries (e.g., pandas, NumPy). Experience with distributed computing frameworks like Apache Spark is a plus; Airflow would also be a bonus. Role overview: If you're looking to …
Newcastle Upon Tyne, England, United Kingdom Hybrid / WFH Options
Xpertise Recruitment
version control, CI/CD, and model monitoring. Proficiency in Python and relevant data manipulation and analysis libraries (e.g., pandas, NumPy). Experience with distributed computing frameworks like Apache Spark is a plus; Airflow would also be a bonus. Role overview: If you're looking to …
inference of these models. You will develop the core inference engine to seamlessly deploy large machine learning models to customers at scale and across distributed systems, contributing significantly to the automated pipeline, optimizing for high-throughput training runs and rapid experimentation while achieving top hardware efficiency. Please note that … professional experience in a similar capacity. Qualifications: We are seeking candidates with exceptional ML engineering evidenced by: Experience in creating and managing high-performance computing clusters across GPU/TPU, preferably in PyTorch. Proficiency in efficient serving of large machine learning models at scale, including quantization and distributed computing, leveraging libraries such as DeepSpeed. Strong software engineering acumen with expertise in software design/architecture, particularly in Python. Understanding of the latest AI research and ability to efficiently implement these systems. Prior experience at a leading machine learning company (OpenAI, DeepMind, Meta, Anthropic, Hugging Face, etc.).
Central London, London, United Kingdom Hybrid / WFH Options
Intellect UK Group Limited
etc.). Experience working with financial data, trading systems, or risk management tools is highly desirable. Knowledge of database systems (e.g., SQL, NoSQL) and distributed computing frameworks (e.g., Spark) is a plus. Excellent problem-solving skills, attention to detail, and ability to thrive in a fast-paced, collaborative …
be the best data engineers in the world. Profile The Data Solutions Architect specialises in designing and implementing large-scale data processing solutions using distributed computing frameworks, AWS native components, and AWS cloud platforms. They are comfortable designing and constructing bespoke solutions and components from scratch to solve … in other settings will always be considered. Key responsibilities of the role are summarised below: Design and implement large-scale data processing systems using distributed computing frameworks such as Apache Kafka and Apache Spark. Architect cloud-based solutions capable of handling petabytes of data. Lead the automation of … data technologies and cloud platforms, with demonstrable experience on large-scale projects. Deep expertise in Java, Scala, and software engineering. Strong background in distributed computing frameworks and scaled processing. Excellent communication and teamwork skills. Ability to communicate with and inspire large teams.
frameworks like Pandas and PySpark. Architectural Proficiency: Deep understanding of data architecture, modelling, and warehousing concepts, coupled with experience in big data technologies and distributed computing. Cloud and Tools Aptitude: Familiarity with cloud platforms including Snowflake, AWS services and continuous integration tooling, enhancing your ability to deliver cutting-edge …
with a good understanding of the functional programming paradigm. 2+ years' experience in unit testing and test-driven development. 5+ years' experience with multi-threading and distributed computing. 5+ years of in-depth understanding of Java performance tuning and GC optimizations. Experience in a Financial Services/Banking environment. Agile/…
robust data pipelines to assist analysis of large datasets. Develop infrastructure for trading services to convert research ideas into production. Experience with large-scale distributed computing technologies. Requirements: Bachelor's degree or higher in computer science or other quantitative discipline. 3+ years of Modern C++ experience required. (Post …
cloud infrastructure (AWS being the preference, not essential). Start-up experience is preferred but not essential. The extras: Experience working on AI-based products. Distributed computing experience (Spark, MPI, etc.). Experience orchestrating workflows, particularly within distributed system environments. Knowledge of MLOps principles and practices, especially in implementing …
a core focus on increasing developer productivity and overall developer experience. 💡 What You Need: Strong SWE skills - Python or Golang preferred. Expert knowledge of distributed computing technologies - Kubernetes strongly preferred. Competent front-end chops - React & JavaScript preferred - open to similar frameworks/tech. Strong automation & config management tooling …
FX. You have a strong knowledge of Linux OS/systems administration. You have a good understanding of computer architecture, databases, real-time systems, distributed computing, OOP, Data Structures, Design Patterns, Algorithms. You're collaborative with excellent English language communication skills. You are degree educated, likely to MSc …
software development principles. Experience applying Machine Learning techniques to infrastructure development, such as optimising data pipelines and building robust systems. Strong familiarity with cloud computing platforms and distributed computing frameworks. Passion for technology and a desire to make an impact in the finance industry, even without direct …
/Monthly Responsibilities: Build-and-run responsibilities for GenAI, ensuring robust support folding into standard incident processes as the products mature. Help distributed teams across the business understand how to consume AI/ML capabilities. Hands-on code, build, govern and maintain. Working as part of … Azure Machine Learning or GCP Cloud ML Engine, Azure Data Lake, Azure Databricks or GCP Cloud Dataproc. Familiarity with big data technologies and distributed computing frameworks, such as Hadoop, Spark, or Apache Flink. Experience scaling an “API-Ecosystem”, designing and implementing “API-First” integration patterns. Experience working …
components for both live trading and simulation. Developing a seamless platform to handle all aspects of quant trading: model building, optimization, and trade execution. Distributed computing. Maintaining and updating the platform, ensuring its stability, robustness, and security. Troubleshooting and resolving any systems-related issues and handling the release …
Machine Learning Infrastructure Engineer to join their founding team. They are looking for people with skills directly linked to creating and managing high-performance computing (HPC) clusters across GPU/TPU chips and serving large machine learning models at scale. Not 1-10 GPUs but 1000s. You … researchers, founders and advisors to develop the next generation of high-availability LLMs. ML engineering experience: Experienced in creating and managing high-performance computing clusters across GPU/TPU, preferably in PyTorch. Proficient in efficient serving of large machine learning models at scale, including quantisation and distributed computing; experience with libraries such as DeepSpeed. Strong software engineering experience in Python. Understanding of the latest AI research. Your background: Worked at a leading machine learning company. Worked at a fast-growing start-up. Please submit your CV to find out more.
Greater London, England, United Kingdom Hybrid / WFH Options
NetMind.AI
time, Onsite, 5 days a week; remote working arrangements may be discussed with the line manager. About Us NetMind is a cutting-edge, massively distributed computing platform designed for AI modelling and applications. Currently, we are running a start-up project within the life sciences sector, NetMind.life, with …
higher in computer science or other related engineering fields. Plusses: • Experience with React • Experience with MongoDB • Experience working on streaming technologies like Kafka and distributed technologies like Apache Ignite • Experience working on AWS, GCP, Kubernetes, IaC • Experience working with C# • Financial industry experience • Experience with cloud like AWS, GCP … and data engineers across the firm in delivering data platforms and pipelines for the loans team. The build of this platform would use distributed compute, distributed micro-services, multi-threading and multi-processing. The candidate will be expected to be heavily hands-on coding on a daily …
on industry best practices. Nice to Haves: Experience as a Site Reliability Engineer or with high-availability systems. Background in production infrastructure and troubleshooting distributed systems. Familiarity with mobile development and distributed computing. What You'll Get: £90,000 base salary. 12% Bonus. $100,000 Shares. Flexible working …
Platform Engineering Consultant. Skilled working with core solutions and offerings of at least one of the major public cloud providers (AWS, GCP or Azure). Distributed Systems Experience: You have played a primary role in architecting and developing complex distributed architectures. You understand and can reason about core distributed computing principles and theory, and can recognise patterns and approaches taken by different technologies and tools in this space. Continuous Delivery Experience: You know all the pieces of the puzzle that fit together to make a secure, repeatable and scalable continuous delivery pipeline. Everything from secrets management to …
s global R&D activity, conducts research and innovation – achieving together whilst supporting our employees. We are currently looking for a researcher in our Computing Research Group, working on exploring how to use High Performance Computing to accelerate applications at the intersection of Artificial Intelligence and Genomics. Your … role will involve: Analyzing and profiling code to find hotspots. Using parallel and distributed computing techniques to accelerate algorithms, in particular those dealing with large-scale graph computations. Optimization for heterogeneous (CPU + GPU) architectures or for Fujitsu's Arm-based A64FX processor. Your experience: To be suitable … approach, adaptable and able to demonstrate that you have: A PhD or similar Post-Graduate qualification in a relevant field. Experience with High Performance Computing techniques such as MPI, OpenMP, Intel TBB etc. High proficiency in one or more programming languages. A track record of writing academic publications. Excellent …
and warehousing principles that underpin most data platforms. Ability to ingest, transform, and integrate data from diverse sources into well-structured information assets. Knowledge of distributed computing techniques like parallel processing, streaming, and batch workflow orchestration that enable handling large data volumes. Experience with ETL, data pipelines, and building automated … high-performance data solutions. Data modeling, warehouse design, and database optimization knowledge, with samples of logical/physical models that reflect proficiency. Deploying and managing distributed data systems. Ability to monitor, troubleshoot, and tune these systems for reliability and performance. Coding experience that demonstrates modularity, reusability, and efficiency across languages.
areas: transactional and/or analytical database management systems, query processing and optimisation, storage engines, indexing engines, concurrent/parallel algorithms and data structures, distributed computing, parallel programming frameworks, benchmarking and performance analysis, graph theory and graph algorithm design, programming/query languages, computer architecture, vectorised processing, operating … design. Have participated in the implementation of (aspects of) a database management system or systems of a similar low-level nature (e.g., operating systems, distributed workflow systems, compilers). Have published papers at top peer-reviewed conferences or journals in fields related to the above (desired but not essential …