review systems, and are capable of producing high-quality prototypes and production code. They have experience running models on cloud hardware and parallelizing data and models across accelerators. Data engineering capabilities: The candidate is experienced … in building ML data pipelines for training and evaluating deep learning models, including raw data analysis, dataset management, and scalable pipeline construction. Passion for optimization: They possess in-depth knowledge of ML libraries, hardware interactions, and optimization techniques for model training, inference speed, and validation …
London, South East England, United Kingdom | Hybrid / WFH Options
Enigma
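As a point of reference for the pipeline-building skills described in the listing above, here is a minimal PyTorch training-input pipeline. It is an illustrative sketch only; the dataset class, toy examples, and batch size are hypothetical and not taken from the role.

```python
# Minimal sketch of a training data pipeline in PyTorch (all names hypothetical).
import torch
from torch.utils.data import Dataset, DataLoader

class ToyTextDataset(Dataset):
    """Serves pre-tokenised (input_ids, label) pairs held in memory."""
    def __init__(self, examples):
        self.examples = examples

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        input_ids, label = self.examples[idx]
        return torch.tensor(input_ids), torch.tensor(label)

# Toy data stands in for a real corpus produced by an upstream pipeline.
examples = [([1, 5, 9, 2], 0), ([4, 7, 3, 8], 1)]
loader = DataLoader(ToyTextDataset(examples), batch_size=2, shuffle=True)

for batch_inputs, batch_labels in loader:
    pass  # a forward/backward pass over the batch would go here
```

Parallelising such a loader across accelerators would typically add a DistributedSampler and a process group, which are omitted here for brevity.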
decisions about the technological stack and system design, considering all trade-offs. Work on ML model development with our team to build robust data pipelines and deploy ML models. Analyse large corpora of data and build real-time recommendation systems. Scale backend and web frontend development … teams. Deliver numerous new, useful, and interesting features. Work with large amounts of data in different modalities: texts, videos, images, audio. Lead the full-stack development of a high-scale system designed for millions of users. Work in a very cross-functional team, side by side with mobile developers, product managers, ML engineers, and designers. Skills needed: Build high-load applications from scratch. …
Develop, deploy, and maintain AI systems and services in the healthcare domain to find more untreated patients. London | Technical. Pangaea Data (Pangaea) is a South San Francisco and London based business founded by Dr Vibhor Gupta and Prof Yike Guo (Director, Data Science Institute … and the life sciences, including Lord David Prior (former chairman, NHS England) and Mr. Andy Palmer (former CIO, Novartis). The Role Pangaea Data is looking for talented engineers to join its technical team and focus on the development, deployment, and maintenance of AI systems and services in … AI-based solutions. Key Responsibilities Technical Responsibilities: Design and develop backend systems that integrate healthcare AI/LLM modules for tasks such as data extraction, summarization or decision support. Design and implement systems that integrate with the client side, such as a Web UI or EHR system. Implement and …
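To illustrate the kind of backend integration described above, here is a minimal, hypothetical FastAPI endpoint that wraps a summarisation module behind an HTTP API. The endpoint path, request model, and the stubbed summarise_note function are invented for the sketch and are not Pangaea's actual design.

```python
# Hypothetical sketch: exposing a summarisation module via a small backend service.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Note(BaseModel):
    patient_id: str
    text: str

def summarise_note(text: str) -> str:
    # Stand-in for a call into an AI/LLM summarisation module.
    return text[:200]

@app.post("/summarise")
def summarise(note: Note) -> dict:
    return {"patient_id": note.patient_id, "summary": summarise_note(note.text)}
```

In a real deployment the stub would call the model-serving layer, and the service would be run behind an ASGI server such as uvicorn.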
Financial engineering or a related finance degree. Experience with equity or fixed income derivatives volatility models for risk and trading systems. Experience with data engineering frameworks like Airflow, Luigi or Dagster. Experience developing financial model libraries in Python, Java, or C++. Experience with risk and optimization tools such … assets and their markets, both spot & derivatives products. Responsibilities: Building quantitative models for risk management, asset valuation, derivatives pricing, and simulations. Collaborating with data engineers and the infrastructure team to incorporate quantitative functions into data pipelines and services. Automating numerical and regression testing and backtests to ensure top …
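For context on the derivatives-pricing work mentioned above, the snippet below prices a European call with the standard Black-Scholes formula. It is a generic textbook sketch, not the client's model library, and the input values are arbitrary.

```python
# Generic Black-Scholes European call price (illustrative inputs only).
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, rate, vol, ttm):
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * ttm) / (vol * sqrt(ttm))
    d2 = d1 - vol * sqrt(ttm)
    return spot * norm_cdf(d1) - strike * exp(-rate * ttm) * norm_cdf(d2)

print(bs_call(spot=100.0, strike=105.0, rate=0.03, vol=0.2, ttm=1.0))
```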
Certain Advantage are recruiting on behalf of our growing London-based Data & AI consulting client, who specialise in providing best-in-class support to clients who use Palantir's Foundry, Gotham, and AIP as part of their architecture. A large proportion of projects involve solving problems in the … defence domain, so SC clearance is required, and we can only consider candidates eligible and willing to obtain this. We're looking for data professionals who get excited about engaging with users to understand and break down complex novel problems, and design and deliver data solutions … frameworks like React would also be advantageous. As a Consultant Engineer, your responsibilities will include: Assist in the design, development, and maintenance of data pipelines and ETL processes to build data and action models that address workflow needs. Build and edit operational workflows, including front …
a fast-growing, friendly startup backed by £25M Series A funding, with Series B on the horizon. We're looking for an ML Data Engineer to take ownership of data pipelines and support the growing use of AI and LLMs. You'll be hands-on with … AWS services (S3, Glue, Redshift, Athena, etc.) and play a key role in managing and evolving our data infrastructure. Job Title: Data Engineer – AWS & LLM (Hybrid, London). Salary: £60,000 – £65,000. Location: Hybrid – 3 days/week in London office (5 min walk from London … Bridge Station). Key Responsibilities: Manage end-to-end data flow across the company. Design and maintain ETL processes using AWS. Support LLM and RAG architecture integration. Write clean code and conduct peer reviews. Collaborate across teams to drive business value through data. Requirements: AWS: S3, Glue …
London, South East England, United Kingdom | Hybrid / WFH Options
WeDoData
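As a rough illustration of the ETL work described in the role above, the sketch below reads raw JSON from S3 with PySpark and writes partitioned Parquet back out. The bucket names, paths, and columns are hypothetical, and a real Glue job would typically use the GlueContext wrapper rather than a bare SparkSession.

```python
# Hypothetical PySpark ETL step: raw JSON in S3 -> cleaned, partitioned Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/events/")       # hypothetical bucket
clean = (
    raw.dropDuplicates(["event_id"])                            # hypothetical column
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_type").isNotNull())
)
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/events/"                       # hypothetical bucket
)
```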
passion for deploying AI/ML models into real-world, production-grade applications. Apply if: You have strong foundational software engineering knowledge, including data structures, algorithms, system design, and OOP. You have advanced knowledge of LLM architectures and ML/DL frameworks (e.g. TensorFlow, PyTorch, LangChain, Keras, scikit … and cost-effectiveness in a production environment. Implement and manage the infrastructure for MLOps, including fine-tuning, deployment, monitoring and versioning. Develop robust data pipelines for ingestion, cleaning, model training, and continuous deployment. Build retrieval-aware repositories for model training, evaluation, and real-time context-rich inference. Collaborate …
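The "retrieval-aware" piece mentioned above usually reduces to nearest-neighbour search over embeddings. Below is a minimal, library-agnostic sketch of that retrieval step using cosine similarity in NumPy; the embeddings here are random placeholders rather than real document vectors.

```python
# Minimal sketch of the retrieval step in a RAG setup (placeholder embeddings).
import numpy as np

rng = np.random.default_rng(0)
doc_vecs = rng.normal(size=(1000, 384))   # placeholder document embeddings
query_vec = rng.normal(size=384)          # placeholder query embedding

def top_k(query, docs, k=3):
    q = query / np.linalg.norm(query)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    scores = d @ q                         # cosine similarity against every document
    return np.argsort(scores)[::-1][:k]

print(top_k(query_vec, doc_vecs))          # indices of the k most similar documents
```

In production this step is usually backed by a vector index rather than a brute-force matrix product, but the retrieval logic is the same.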
Job Description: A Data Engineer specializing in Python and PySpark to work on moving data from on-premises systems to AWS. The candidate will design, build, and optimize data pipelines using Python, PySpark, and DBT onto Snowflake on AWS, ensuring a secure and … efficient framework. This role requires expertise in Python, PySpark, DBT, and Airflow, plus hands-on exposure to Snowflake and the Big Data tech stack. Technical Skills: Python, PySpark, DBT, Airflow, AWS (especially S3 and Glue), Apache Iceberg, SQL, Git, Kubernetes and Docker, Shell, Snowflake and Big Data …
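To make the orchestration side of this stack concrete, here is a bare-bones Airflow DAG (2.4+ style) that chains a PySpark ingest step and a dbt build, roughly the shape implied by the tools listed above. The DAG id, schedule, and shell commands are placeholders, not the actual pipeline.

```python
# Hypothetical Airflow DAG: PySpark ingest followed by a dbt build.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_ingest_and_transform",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="spark_ingest",
        bash_command="spark-submit ingest_job.py",          # placeholder job script
    )
    transform = BashOperator(
        task_id="dbt_build",
        bash_command="dbt build --project-dir /opt/dbt",    # placeholder project dir
    )
    ingest >> transform
```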
My client is a quantitative hedge fund: they combine deep expertise in trading, technology, and data science to drive uncorrelated, high-quality investment returns. Their team of software developers and engineers works globally to deliver next-gen data pipelines. If you're a skilled Python developer … of the best quants, traders, and engineers in the industry. At the cutting edge of technology, they are building high-performance, large-scale data analysis pipelines. A data-driven culture, leveraging advanced statistical and machine learning techniques. Global collaboration with offices in major financial hubs worldwide …
London (City of London), South East England, United Kingdom
Stanford Black Limited
is built with .NET (C#) on the backend and React on the frontend, hosted in AWS and handling workflows that include external integrations, data pipelines, batch processing, and partner APIs. We're transitioning from a fully offshore model to a hybrid delivery team. As part of that, we … UK-based developers to take deeper ownership of delivery, quality, and technical standards. We operate in a regulated sector where security, auditability, and data integrity are non-negotiable. At the same time, we avoid unnecessary overhead — processes exist to support delivery, not get in the way. What You … Developing backend services in .NET (C#) for a live, production web application Designing and maintaining APIs, business logic, and batch processes Supporting internal data pipelines and working with third-party data sources Writing clean, well-tested code and participating in structured peer reviews Working alongside a …
Graduate Software Engineer (Data & Infrastructure), London, United Kingdom. At Modo Energy, we're on a mission to build the information architecture for the energy transition - we want to be the only place to come to for information on the global journey to net zero. Take a look at … of a rapidly growing team. The role: As a software engineer, you will be trusted with responsibility from day one, adding value to our data and infrastructure team. You will be responsible for developing new features and ensuring they are scalable, reliable, and efficient to meet future demands. We … hungry to learn quickly. This position requires a proactive approach and readiness to introduce modern practices for developing, monitoring, maintaining, and improving our data pipelines and infrastructure. We offer multiple career progression paths based on your interests and experience, whether that be in data engineering or …
Python Engineer – Distributed Systems & Big Data Location: London (Hybrid or Remote within the UK) Type: Full-time | Competitive Salary + Equity Options We’re looking for a talented Python Engineer to join one of our premier clients based out of London. You’ll play a key role in … next-generation platform. What You’ll Do: Design, build, and optimise distributed systems for high throughput and low latency. Work with large-scale data pipelines (terabytes to petabytes). Collaborate closely with product and data teams to solve complex engineering challenges. Write clean, maintainable Python code … Python programming skills (5+ years preferred). Deep experience with distributed systems (e.g., Kafka, Spark, Ray, Kubernetes). Hands-on work with big data technologies and architectures. Solid understanding of concurrency, fault tolerance, and data consistency. Comfortable in a fast-paced, highly collaborative environment. Bonus Points …
London, South East England, United Kingdom | Hybrid / WFH Options
Oliver Bernard
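As a small illustration of the streaming side of the stack named above, the sketch below consumes messages from a Kafka topic with the confluent-kafka client. The broker address, topic, and group id are placeholders.

```python
# Hypothetical Kafka consumer loop using the confluent-kafka client.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker
    "group.id": "example-consumer-group",    # placeholder group id
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])               # placeholder topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue                          # no message within the timeout
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        print(msg.value().decode("utf-8"))    # process the message payload here
finally:
    consumer.close()
```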
Full-time | £70k. WeDo has partnered with a Fintech Scale-Up on the hunt for an Analytics Engineer who knows their way around data pipelines, loves clean code, and isn’t afraid of the occasional rogue null value. You’ll join a fast-paced fintech team helping them … turn messy data into insights people can actually use. What you’ll do: Build and maintain data models using dbt and Coalesce. Write clean, efficient SQL and Python. Work with analysts and stakeholders to turn questions into answers. Keep their data pipelines flowing smoothly … and reliably. What they're looking for: Strong SQL skills. Solid experience with dbt, Python, and ideally Coalesce. Background in data or analytics engineering. Bonus points if you’ve worked in financial services. Good communicator, team player, and general data nerd. What’s in it for …
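In the spirit of catching the "rogue null value" mentioned above, here is a tiny, hypothetical data-quality check in Python with pandas that could run before a dbt model refresh. The file name and the 5% threshold are invented for the example.

```python
# Hypothetical pre-refresh data-quality check: flag columns with too many nulls.
import pandas as pd

df = pd.read_parquet("transactions.parquet")      # placeholder input file
null_rates = df.isna().mean()                      # fraction of nulls per column
too_null = null_rates[null_rates > 0.05]           # arbitrary 5% threshold

if not too_null.empty:
    raise ValueError(f"Null rate above 5% in: {', '.join(too_null.index)}")
```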
Milton Keynes, Buckinghamshire, South East, United Kingdom
IMSERV EUROPE LIMITED
IMServ is one of the UK's leading data collection and energy metering experts, delivering award-winning services to more customers in more places, meeting industry targets and becoming a benchmark for excellence. We offer a range of specialist metering technology for electricity, gas and water, along with highly … accurate energy data collection services. All this is wrapped up with easy-to-view online data management, analysis and reporting software. PURPOSE OF ROLE: Software Engineers are the main creators of our new line of software products, helping us achieve our ambition to deliver highly … the code throughout the entire product lifecycle, building an understanding of the market intent and tuning our outputs in conjunction with customer usage data to ensure success. Senior Engineers also undertake more complex work, including technical design, plus coach and guide more junior team members. Our Team Leaders …
Senior Software Engineer | Tech for good | C# .NET. Do you want to join a pioneering software house dedicated to transforming industries through data analysis and management? They build cutting-edge solutions that enable businesses to harness the power of data for the greater good of humanity. … development team. In this role, you will design, develop, and maintain scalable, high-performance backend systems. You will work on integrating systems, managing data pipelines, and ensuring system reliability and security in a cloud environment. Key Responsibilities: You will play a key role in designing, developing, and deploying … high-performance software solutions. Manage and wrangle large-scale data sets. Optimise cloud-based architecture. Deliver value at every point of the SDLC. Required Skills & Qualifications: 4+ years of professional experience in C# .NET. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Experience in data …
team's mission: real-world incarnations of some of the most difficult academic problems. An Applied Scientist in the team drives innovation through data-driven insights, building advanced data pipelines and applying machine learning techniques such as computer vision, LLMs, and foundation models. An Applied Scientist … Key job responsibilities - Work on research projects in machine learning and related fields to create highly innovative customer experiences; - Analyze large amounts of data to discover patterns, find opportunities, and develop highly innovative, scalable algorithms to seize these opportunities; - Validate models via statistically rigorous experiments across millions of … peer-reviewed conferences or journals - Experience programming in Java, C++, Python or related language - Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing - Experience building machine learning models or developing algorithms for …
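As a worked example of the "statistically rigorous experiments" mentioned above, the sketch below runs a two-sample proportion z-test comparing conversion rates between a control and a treatment group. The counts are made up, and this is generic experimentation code rather than any Amazon tooling.

```python
# Generic two-sample proportion z-test for an A/B experiment (made-up counts).
from math import sqrt, erf

conv_a, n_a = 1200, 50_000    # control: conversions, impressions (hypothetical)
conv_b, n_b = 1310, 50_000    # treatment: conversions, impressions (hypothetical)

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.4f}")
```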