be the best data engineers in the world. Profile: The Data Solutions Architect specialises in designing and implementing large-scale data processing solutions using distributed computing frameworks, AWS native components, and AWS cloud platforms. They are comfortable designing and constructing bespoke solutions and components from scratch to solve … in other settings will always be considered. Key responsibilities of the role are summarised below: Design and implement large-scale data processing systems using distributed computing frameworks such as Apache Kafka and Apache Spark. Architect cloud-based solutions capable of handling petabytes of data. Lead the automation of … data technologies and cloud platforms, with demonstrable experience on large-scale projects. Deep expertise in Java, Scala, and software engineering. Strong background in distributed computing frameworks and scaled processing. Excellent communication and teamwork skills. Ability to communicate with and inspire large teams.
agile data warehousing systems on the Azure platform, utilising data lakes, Databricks, and Synapse Analytics. Enhance data workflows for large datasets using Databricks, exploiting distributed computing capabilities and coding adaptability. Establish streamlined CI/CD pipelines for release management and contribute to setting team-wide standards, best practices …
software development principles. Experience applying Machine Learning techniques to infrastructure development, such as optimising data pipelines and building robust systems. Strong familiarity with cloud computing platforms and distributed computing frameworks. Passion for technology and a desire to make an impact in the finance industry, even without direct …
great heights. We encourage you to apply! Role Responsibilities Innovate, design and deliver in terms of high reliability, scalability and extensibility. Build large-scale distributed computing programs to generate insightful analytics. Solve unique problems that have a broad impact and deliver business value. Deliver within a team … technologies and knows when/how/if to apply them appropriately. Minimum qualifications: Strong academic record and a degree with high mathematical and computing content, e.g. Computer Science, Mathematics, Engineering or Physics from a leading university. 5+ years of progressive software engineering experience. Expert knowledge of Python and …
Cambridge, Cambridgeshire, East Anglia, United Kingdom Hybrid / WFH Options
Set2Recruit
PyTorch, JAX, Keras, Tensorflow) Strong theoretical understanding of machine learning and neural networks. Experience with containerized processes (Docker, Kubernetes). Familiarity with cloud services and distributed computing frameworks (AWS, PySpark, Ray Distributed). Problem-solving aptitude and creative thinking skills. Benefits: Work From Home flexibility, Employee Assistance Programme, Stock …
Python, Shell Script, R, Matlab, SAS Enterprise Miner. Elasticsearch and an understanding of the Hadoop ecosystem. Experience working with large data sets and with distributed computing tools such as MapReduce, Hadoop, Hive, Pig, etc. Advanced use of Excel spreadsheets for analytical purposes. MSc or PhD in … Data Science or an analytical subject (Physics, Mathematics, Computing) or other quantitative discipline. The position is based close to Manchester. The salary for this position will be circa £75K - £85K plus benefits. Please send your CV to us in Word format along with salary and availability details.
Python, Shell Script, R, Matlab, SAS Enterprise Miner. Elasticsearch and an understanding of the Hadoop ecosystem. Experience working with large data sets and with distributed computing tools such as MapReduce, Hadoop, Hive, Pig, etc. Advanced use of Excel spreadsheets for analytical purposes. An MSc or PhD … in Data Science or an analytical subject (Physics, Mathematics, Computing) or other quantitative discipline would be handy. The position is based close to Manchester. The salary for this Big Data Scientist position will be circa £75K - £85K plus benefits. Please send your CV to us in Word format along …
Python, Shell Script, R, Matlab, SAS Enterprise Miner. Elasticsearch and an understanding of the Hadoop ecosystem. Experience working with large data sets and with distributed computing tools such as MapReduce, Hadoop, Hive, Pig, etc. Advanced use of Excel spreadsheets for analytical purposes. An MSc or PhD … in Data Science or an analytical subject (Physics, Mathematics, Computing) or other quantitative discipline would be handy. The position is based in the Docklands, London. This is a 3 to 6 month contract assignment. Please send your CV to us in Word format along with daily rate and availability …
/Monthly Responsibilities Build and run responsibilities for GenAI, ensuring robust support folding into standard incident processes as the products mature. Help distributed teams across the business consume AI/ML capabilities. Hands-on: code, build, govern and maintain. Working as part of … Azure Machine Learning or GCP Cloud ML Engine, Azure Data Lake, Azure Databricks or GCP Cloud Dataproc. Familiarity with big data technologies and distributed computing frameworks, such as Hadoop, Spark, or Apache Flink. Experience scaling an “API-Ecosystem”, designing, and implementing “API-First” integration patterns. Experience working …
FX You have a strong knowledge of Linux OS/Systems Administration. You have a good understanding of computer architecture, databases, real-time systems, distributed computing, OOP, Data Structures, Design Patterns, Algorithms. You're collaborative with excellent English language communication skills. You are degree educated, likely to MSc …
record leading complex ETL and Data Infrastructure projects, as well as designing and building data-intensive applications and services. Experience with data processing and distributed computing frameworks such as Apache Spark. Expert knowledge in one or more of the following languages: Python, Scala, Java, Kotlin. Deep knowledge of …
sales experience and have been involved in RFP/RFI/RFQ processes. Creative problem-solver with strong communication skills. Excellent understanding of traditional and distributed computing paradigms. Should have excellent knowledge of data warehouse/data lake technology and business intelligence concepts. Should have good knowledge of Relational …
frameworks like Pandas and PySpark. Architectural Proficiency: Deep understanding of data architecture, modelling, and warehousing concepts, coupled with experience in big data technologies and distributed computing. Cloud and Tools Aptitude: Familiarity with cloud platforms including Snowflake, AWS services and continuous integration tooling, enhancing your ability to deliver cutting-edge …
with a good understanding of the functional programming paradigm. 2+ years' experience in unit testing and test-driven development. 5+ years' experience with multi-threading and distributed computing. 5+ years of in-depth understanding of Java performance tuning and GC optimizations. Experience in a Financial Services/Banking environment. Agile/…
robust data pipelines to support analysis of large datasets. Develop infrastructure for trading services to convert research ideas into production. Experience with large-scale distributed computing technologies. Requirements: Bachelor’s degree or higher in computer science or another quantitative discipline. 3+ years of Modern C++ experience required. (Post …
Birmingham, England, United Kingdom Hybrid / WFH Options
Xpertise Recruitment
version control, CI/CD, and model monitoring. Proficiency in Python and relevant data manipulation and analysis libraries (e.g., pandas, NumPy). Experience with distributed computing frameworks such as Apache Spark would be a plus; Airflow would be a bonus. Role overview: If you're looking to …
Newcastle Upon Tyne, England, United Kingdom Hybrid / WFH Options
Xpertise Recruitment
version control, CI/CD, and model monitoring. Proficiency in Python and relevant data manipulation and analysis libraries (e.g., pandas, NumPy). Experience with distributed computing frameworks such as Apache Spark would be a plus; Airflow would be a bonus. Role overview: If you're looking to …
cloud infrastructure (AWS being the preference, not essential). Start-up experience is preferred but not essential. The extras: Experience working on AI-based products. Distributed computing experience (Spark, MPI, etc.). Experience orchestrating workflows, particularly within distributed system environments. Knowledge of MLOps principles and practices, especially in implementing …
Platform Engineering Consultant. Skilled working with core solutions and offerings of at least one of the major public cloud providers (AWS, GCP or Azure). Distributed Systems Experience: You have played a primary role in architecting and developing complex distributed architectures. You understand and can reason about core distributed computing principles and theory, and can recognise patterns and approaches taken by different technologies and tools in this space. Continuous Delivery Experience: You know all the pieces of the puzzle that fit together to make a secure, repeatable and scalable continuous delivery pipeline. Everything from secrets management to …
s global R&D activity, conducts research and innovation – achieving together whilst supporting our employees. We are currently looking for a researcher in our Computing Research Group, working on exploring how to use High Performance Computing to accelerate applications at the intersection of Artificial Intelligence and Genomics. Your … role will involve: Analyzing and profiling code to find hotspots. Using parallel and distributed computing techniques to accelerate algorithms, in particular those dealing with large-scale graph computations. Optimization for heterogeneous (CPU + GPU) architectures or for Fujitsu’s Arm-based A64FX processor. Your experience: To be suitable … approach, adaptable and able to demonstrate that you have: A PhD or similar Post-Graduate qualification in a relevant field. Experience with High Performance Computing techniques such as MPI, OpenMP, Intel TBB, etc. High proficiency in one or more programming languages. A track record of writing academic publications. Excellent …
inference of these models. You will develop the core inference engine to seamlessly deploy large machine learning models to customers at scale and across distributed systems, contributing significantly to the automated pipeline, optimizing for high-throughput training runs and rapid experimentation while achieving top hardware efficiency. Please note that … professional experience in a similar capacity. Qualifications: We are seeking candidates with exceptional ML engineering skills, evidenced by: Experience in creating and managing high-performance computing clusters across GPU/TPU, preferably in PyTorch. Proficiency in efficient serving of large machine learning models at scale, including quantization and distributed computing, leveraging libraries such as DeepSpeed. Strong software engineering acumen with expertise in software design/architecture, particularly in Python. Understanding of the latest AI research and the ability to implement these systems efficiently. Prior experience at a leading machine learning company (OpenAI, DeepMind, Meta, Anthropic, HuggingFace, etc.).
Exeter, Devon, United Kingdom Hybrid / WFH Options
Tec Partners
ethical AI solutions. Excellent communication skills, capable of simplifying complex technical concepts. Experience implementing cutting-edge NLP research and working with large datasets and distributed computing frameworks. Familiarity with Azure, AWS, or GCP. Startup Mindset: To thrive in our environment, you should be proactive, adaptable to change, and …
and warehousing principles that underpin most data platforms. Ability to ingest, transform, and integrate data from diverse sources into well-structured information assets. Knowledge of distributed computing techniques like parallel processing, streaming, and batch workflow orchestration that enable handling large data volumes. Experience with ETL, data pipelines, and building automated … high-performance data solutions. Data modeling, warehouse design, and database optimization knowledge - with samples of logical/physical models that reflect proficiency. Deploying and managing distributed data systems. Ability to monitor, troubleshoot, and tune these systems for reliability and performance. Coding experience that demonstrates modularity, reusability, and efficiency - across languages.
Greater London, England, United Kingdom Hybrid / WFH Options
ManpowerGroup
forward in endlessly different directions. We are looking for a world-class Site Reliability Engineer with experience in developing processes, tools and automation for managing distributed systems in production environments. Our team combines software and systems engineering with system administration practices to develop creative engineering solutions to operations problems. We … infrastructure and application layers. Demonstrated automation skills, showcasing how you've applied automation to solve problems and reduce manual effort and activity. Knowledge of distributed computing and cloud-native applications, including proficiency in AWS, Terraform, the ELK stack (including the monitoring tools mentioned), PagerDuty/OpsGenie or similar, and …