Lead | £100,000 + 20% Bonus | Hybrid - 2 Days per Week in London 💰 £100,000 + 20% Bonus 📍 London - 2 days a week in office 🛠️ AWS, Python, Terraform, Kubernetes, RAG/GenAI interest Here at Burns Sheehan, we're exclusively partnered with a leading UK fintech that's transforming small business lending - processing billions in decisions annually. They've built … CloudFormation) and cloud-native development. Python scripting proficiency at platform-engineer level, plus an understanding of the software development lifecycle and production system ownership. Interest or experience in GenAI infrastructure - RAG, vector databases, LLM platforms, or agent frameworks. Strong communication skills and the ability to translate technical decisions into business outcomes. This role is NOT an ML Engineering position - you won't …
external enablement activities. Preferred qualifications: Experience with full-stack ML engineering to seamlessly combine retrieval-based knowledge and text generation to implement and optimize RAG models using first-party and OSS models. Experience with implementing search concepts, such as indexing, scoring, relevancy, faceting, and query rewriting and expansion. Experience with semantic search frameworks and tools …
AI Product Manager (RAG experience essential) - £650 per day - Outside IR35 Are you excited by the challenge of turning cutting-edge AI into real, working products? We're looking for an experienced Product Manager to help define, deliver, and scale AI Minimum Viable Products (MVPs) for a major enterprise client - applying RAG, LLMs, and generative AI to automate workflows, enhance … high-value AI opportunities and shape product roadmaps. Translate requirements into actionable user stories and acceptance criteria for engineering and data teams. Collaborate closely with teams building solutions using RAG pipelines, LLMs, embeddings, and API integrations. Define and track KPIs such as accuracy, latency, adoption, and ROI, iterating based on real-world feedback. Ensure all solutions meet enterprise-level standards … About You Proven experience as a Product Manager in AI, data, or enterprise technology environments. Hands-on exposure to AI MVPs, LLM applications, or applied ML use cases (e.g. RAG, copilots, generative search). Strong technical grounding in vector search, embeddings, APIs, data pipelines, and prompt design. Skilled in navigating complex enterprise settings and managing senior stakeholders. Pragmatic and delivery …
City of London, London, United Kingdom Hybrid/Remote Options
Intelix.AI
/CDC) and entity resolution for graph population Author complex queries (Cypher, GSQL, AQL, SPARQL, etc., depending on stack) Integrate knowledge graph retrieval & reasoning into LLM/RAG/GraphRAG systems Develop and evaluate graph ML/embedding models (link prediction, anomaly detection) Optimize graph performance, scaling, and query efficiency Liaise with client stakeholders: translate business problems into … TigerGraph, ArangoDB, OrientDB, or Stardog Proficiency in query languages (Cypher, GSQL, AQL, SPARQL, etc.) Strong background in pipelines, ETL, and entity resolution Exposure to integrating KG + LLM or RAG architectures Experience with graph algorithms, embeddings, or GNNs Cloud & production engineering literacy (AWS/Azure/GCP, containerization, CI/CD) Excellent communication skills - able to explain complex graph/ …
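As a minimal sketch of the kind of knowledge-graph retrieval feeding an LLM/GraphRAG system described above, the snippet below runs a Cypher query through the official neo4j Python driver and turns the results into prompt context. The connection details, the `Entity` label, the `answer` prompt, and the example entity are hypothetical illustrations, not taken from the advert.

```python
# Sketch: Cypher-backed retrieval for a GraphRAG-style prompt (Neo4j assumed).
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"     # assumed local instance
AUTH = ("neo4j", "password")      # placeholder credentials

driver = GraphDatabase.driver(URI, auth=AUTH)

def neighbours_of(entity_name: str, limit: int = 10) -> list[str]:
    """Return an entity's direct relationships as plain-text facts."""
    query = (
        "MATCH (e:Entity {name: $name})-[r]->(n) "
        "RETURN type(r) AS rel, n.name AS target LIMIT $limit"
    )
    with driver.session() as session:
        records = session.run(query, name=entity_name, limit=limit)
        return [f"{entity_name} {rec['rel']} {rec['target']}" for rec in records]

# GraphRAG-style usage: retrieved triples become grounding context for the LLM.
facts = neighbours_of("Acme Ltd")
prompt = "Answer using only these facts:\n" + "\n".join(facts) + "\n\nQ: Who supplies Acme Ltd?"
```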
Do you want to build the infrastructure powering the next generation of AI agents? Have you scaled backend systems that drive automation at speed? Ready to join a profitable AI startup transforming how industries deploy technology? A profitable, fast-growing AI company is building the "roads" for AI agents, providing the infrastructure that lets businesses deploy and integrate …/LLM components into production systems. Role Breakdown: 50% Backend Engineering: FastAPI, Flask, Node.js, CI/CD 30% Data Engineering: ETL, DBT, Airflow 20% AI/LLM Integration: LangChain, RAG pipelines, orchestration Key Responsibilities: Design and build backend services to support AI agent deployment Develop scalable data pipelines and integration layers Implement AI/LLM-powered features with LangChain and … of Front-End, and adoption engineers Reporting Line: CTO Visa: Cannot sponsor Ideal Profile: 5–8 years' experience in backend or data engineering (Python) Exposure to AI/LLMs, RAG, or personal AI side projects Experience in startups or scale-ups where you've helped build or grow systems Solid education and strong coding fundamentals Comfortable working in a small …
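A minimal sketch of the backend/AI-integration split described above: a FastAPI endpoint that wires a retrieval step into an LLM call. The `/ask` route, `retrieve()`, and `call_llm()` are hypothetical placeholders for the LangChain/RAG layer, not the company's actual API.

```python
# Sketch: FastAPI service wrapping a retrieve-then-generate step.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AskRequest(BaseModel):
    question: str

def retrieve(question: str) -> list[str]:
    # Placeholder: in practice this would query a vector store / LangChain retriever.
    return ["(retrieved document snippet)"]

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call the chosen LLM provider.
    return "(model answer)"

@app.post("/ask")
async def ask(req: AskRequest) -> dict:
    context = retrieve(req.question)
    prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {req.question}"
    return {"answer": call_llm(prompt)}

# Run locally with: uvicorn app:app --reload   (assuming this file is app.py)
```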
We are looking for Software Engineers to join our partner company! You will play a key role in building next-generation infrastructure and AI systems that accelerate innovation across industries such as aerospace, healthcare, and energy, fields still limited by outdated, manual processes. This is not a traditional engineering role. It's a chance to join an early … technologies. Design and ship backend systems from day one: scalable architectures, APIs, and AI-powered tools. Build and deploy LLM-based applications using frameworks such as LangChain, semantic search, RAG, and fine-tuning (SFT/RL). Develop backend logic and data pipelines to power intelligent automation and real-time processing. Manage infrastructure: relational and graph databases, containerization, CI/… Backend, or AI Engineer. Proven track record of building and shipping production-ready systems. Hands-on experience with distributed systems, APIs, databases, and cloud infrastructure. Familiarity with LLMs, LangChain, RAG, or fine-tuning techniques (SFT/RL). Strong problem-solving and debugging abilities with a proactive, builder's mindset. Comfortable in a fast-paced, early-stage environment where you …
An exciting opportunity to join a small, high-impact AI team building an internal AI Platform that powers the next generation of intelligent products and services. You'll work on cutting-edge AI infrastructure, collaborate with experienced platform engineers, and make a real impact on how AI innovation scales across the organization. What you'll be doing: Building … managing LLMOps tools such as LiteLLM, Langflow, and Langfuse Implementing observability with Prometheus, Grafana, and Loki Managing infrastructure through Terraform, ArgoCD, and GitHub Actions Supporting internal AI applications including RAG, document processing, and internal AI assistants What you'll need: 2–4 years in Platform or DevOps Engineering (Azure preferred) Strong experience with Kubernetes, Docker, and Terraform Programming or scripting …
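To illustrate the observability work mentioned above, here is a minimal sketch using the prometheus_client library to expose custom metrics for Prometheus to scrape. The metric names, the port, and the `handle_request()` stub are assumptions chosen for illustration.

```python
# Sketch: exposing request count and latency metrics from an internal AI service.
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("ai_requests_total", "Total requests served by the AI service")
LATENCY = Histogram("ai_request_latency_seconds", "End-to-end request latency")

def handle_request() -> str:
    REQUESTS.inc()
    start = time.perf_counter()
    answer = "(model answer)"                    # placeholder for the real LLM call
    LATENCY.observe(time.perf_counter() - start)
    return answer

if __name__ == "__main__":
    start_http_server(9100)                      # metrics served at :9100/metrics
    while True:
        handle_request()
        time.sleep(5)
```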
City of London, London, United Kingdom Hybrid/Remote Options
Harnham
Engineer, you'll take ownership of core backend systems powering an advanced AI platform that integrates production-ready LLMs and agentic systems. You'll architect scalable backend services, build RAG pipelines, and integrate AI components into distributed systems that enable real-world, production-scale AI deployment. 🔧 What You'll Do Architect and develop backend systems (Python, FastAPI, Flask, NodeJS … Design and implement RAG pipelines (LangChain, Qdrant, embeddings) Build ETL & CI/CD workflows (Airflow, dbt, MLflow) Integrate AI/LLM components into backend services Ensure reliability, scalability, and maintainability across systems Collaborate with a small, elite team of engineers and AI researchers 💼 About You 4+ years of backend experience (Python, distributed systems, async APIs) Proven experience integrating AI or …
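A minimal sketch of the retrieval step inside such a RAG pipeline: score a query embedding against stored chunk embeddings by cosine similarity and build a grounded prompt. The `embed()` function is a hypothetical stand-in for a real embedding model, and the chunks are toy data; in production the vectors would sit in Qdrant rather than an in-memory array.

```python
# Sketch: cosine-similarity retrieval over pre-computed chunk embeddings.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: deterministic pseudo-vector derived from the text hash.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

CHUNKS = ["Invoices are paid within 30 days.", "Refunds require a ticket ID."]
CHUNK_VECS = np.stack([embed(c) for c in CHUNKS])

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = CHUNK_VECS @ embed(query)           # cosine similarity on unit vectors
    top = np.argsort(scores)[::-1][:k]
    return [CHUNKS[i] for i in top]

context = retrieve("How long do refunds take?")
prompt = "Use only this context:\n" + "\n".join(context) + "\n\nQuestion: How long do refunds take?"
```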
Greater London, England, United Kingdom Hybrid/Remote Options
OpenKit
client projects, developing AI applications that address operational challenges for our clients. Your responsibilities will include: Developing full-stack AI applications using PostgreSQL, Supabase, React, TypeScript, and Python Implementing RAG systems for enterprise LLM applications Designing and refining prompt chains and AI agents Building and integrating internal and external APIs to connect AI systems with existing infrastructure Benchmarking AI model … PostgreSQL preferred) and API development Can provide tangible evidence of engagement with AI technologies, e.g. through personal projects or participation in AI communities/hackathons. Desirable: Experience developing custom RAG systems with vector databases or model fine-tuning Familiarity with Supabase/SQL or vector databases Knowledge of prompt engineering techniques (e.g., structured responses, tool use) and understanding of LLM …
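A minimal sketch of the prompt-chain idea mentioned above: one call extracts structured facts from a document, a second drafts a reply grounded in those facts. `call_llm()` is a hypothetical stand-in for whichever provider API the project uses, and its canned responses exist only so the example runs.

```python
# Sketch: two-step prompt chain (extract structured facts, then draft a reply).
import json

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call the provider's chat API.
    if prompt.startswith("Extract"):
        return '{"customer": "Acme Ltd", "issue": "late delivery"}'
    return "Dear Acme Ltd, we are sorry about the late delivery."

def extract_facts(document: str) -> dict:
    prompt = f"Extract customer and issue as JSON from:\n{document}"
    return json.loads(call_llm(prompt))

def draft_reply(facts: dict) -> str:
    prompt = f"Write a short apology to {facts['customer']} about their {facts['issue']}."
    return call_llm(prompt)

facts = extract_facts("Ticket: Acme Ltd reports a late delivery on order 4412.")
print(draft_reply(facts))
```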
not a feature contributor. Fast-moving environment. Immediate impact: Your code will run in production and support real users from day one. Technical depth: Multi-agent reasoning, voice-streaming, RAG optimisation, all in one system. Flexible setup: Remote across the EU, with optional co-working in London or Barcelona. What you'll do Obsessive about latency, you think in … model performance. Design, implement, and productionise multi-agent LLM systems that reason, plan, and coordinate. Develop FastAPI-based microservices optimised for low latency and high reliability. Engineer and evaluate RAG pipelines: hybrid retrieval, re-ranking, grounding, and context validation. Integrate real-time voice interfaces (STT/TTS, WebRTC, LiveKit) into intelligent conversational flows. Instrument and evaluate system performance …
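A minimal sketch of the hybrid-retrieval step named above: fuse a keyword ranking and a vector ranking with reciprocal rank fusion (RRF) before handing candidates to a re-ranker. The document IDs, the two input rankings, and the k = 60 constant are illustrative assumptions; the BM25 and vector-search components are assumed to exist elsewhere.

```python
# Sketch: reciprocal rank fusion for hybrid retrieval.
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """RRF: score(d) = sum over rankings of 1 / (k + rank of d)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc3", "doc1", "doc7"]   # e.g. from a BM25 index
vector_hits = ["doc1", "doc9", "doc3"]    # e.g. from embedding search
fused = rrf([keyword_hits, vector_hits])
print(fused[:3])                          # candidates passed on to the re-ranking stage
```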
We're looking for a DevOps Engineer who's excited about building the infrastructure behind next-generation AI products. You'll join a small, experienced team developing an internal Kubernetes-based platform that enables AI innovation across the organisation, automating everything from deployments to observability, and helping developers build smarter applications with confidence. What you'll be doing … environments Managing Infrastructure as Code with Terraform and improving GitOps workflows (ArgoCD/GitHub Actions) Building observability and monitoring stacks using Prometheus, Grafana, and Loki Supporting AI workloads (LLMs, RAG, and document processing applications) running on Kubernetes Automating platform operations with Python, Go, and shell scripting Implementing security guardrails, PII compliance tooling, and best practices for production AI systems. What …
VP of Engineering - Hands-On AI | LLMs, RAG, Multi-Agent Systems | High-Growth Tech | On-Site A high-growth technology company is searching for an exceptional hands-on VP of Engineering (AI) to lead and build next-generation AI products. This is not a people-manager or advisory role; it's a deep technical, founder-style position for … the next wave of applied AI. What You'll Be Building: You'll architect, build, and ship real AI products including: LLM-driven applications (generation, reasoning, orchestration) RAG pipelines, vector search, knowledge systems Multi-agent/agentic AI workflows High-performance backend systems in Python & Node.js You'll be expected to write code daily (~80%), make technical decisions …
We're looking for a talented AI Engineer to help build the next generation of intelligent, agent-based systems for clinical research. This is an exciting opportunity to shape an LLM-driven platform that uses generative AI to enhance data quality, automate risk detection, and support decision-making in global clinical trials. You'll design and develop LLM … connect with complex datasets and analytical tools, working with cross-functional experts to bring AI innovation into a real-world, regulated environment. If you're passionate about agentic AI, RAG/KAG architectures, and building safe, auditable systems that make a tangible impact on healthcare, this role offers the chance to help define what's next in applied AI.
City Of London, England, United Kingdom Hybrid/Remote Options
Lorien
user-focused AI services across our business. What You'll Do Design and implement a comprehensive AI testing and evaluation framework for all AI solutions, including LLM-based tools, RAG systems, and third-party platforms. Define and document quality standards for semantic accuracy, factual consistency, bias, tone, and relevance. Develop reusable testing templates, data sets, and evaluation methods that can … LLM behaviour (accuracy, hallucination, bias, tone, etc.). Familiarity with tools like TruLens, Humanloop, PromptLayer, or similar; experience designing QA approaches for GenAI environments. Knowledge of modern AI architectures (RAG pipelines, embeddings, API integrations such as OpenAI, Azure OpenAI, Anthropic). Experience designing and implementing structured test regimes in fast-evolving contexts. Excellent communication and facilitation skills, engaging both technical …
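As a deliberately tiny illustration of one slice of such an evaluation framework: a labelled test set, a generation stub, and two metrics - substring-based factual accuracy and a crude numeric grounding check as a hallucination proxy. The test case, the `generate()` stub, and the metric definitions are assumptions, not the client's actual framework.

```python
# Sketch: a minimal LLM evaluation harness for accuracy and grounding.
TEST_CASES = [
    {"question": "What is the notice period?",
     "context": "The notice period is 30 days.",
     "expected": "30 days"},
]

def generate(question: str, context: str) -> str:
    return "30 days"                     # placeholder for the real LLM call

def evaluate(cases: list[dict]) -> dict:
    correct, grounded = 0, 0
    for case in cases:
        answer = generate(case["question"], case["context"])
        if case["expected"].lower() in answer.lower():
            correct += 1
        # Crude grounding check: every number in the answer must appear in the context.
        numbers = [tok for tok in answer.split() if tok.strip(".,").isdigit()]
        if all(n in case["context"] for n in numbers):
            grounded += 1
    n = len(cases)
    return {"factual_accuracy": correct / n, "grounding_rate": grounded / n}

print(evaluate(TEST_CASES))
```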
City of London, London, United Kingdom Hybrid/Remote Options
Future Talent Group
🚀 AI/Data Engineer 📍 Location: London (Hybrid – 1 every other week) 💼 Up to £600 a day (Inside IR35) The Role As a Data Engineer, you'll join our cross-functional Product team of software engineers, ML specialists, domain experts …
LLM tooling and libraries (e.g., Hugging Face Transformers/PEFT, tokenisers, spaCy, or similar) Experience shipping NLP systems: prompt engineering, fine-tuning (e.g., LoRA/PEFT), vector search, and RAG-based services Great knowledge of NLP algorithms: tokenisation, embeddings, attention, language modelling, text classification/generation, and information retrieval Desirable Skills 👌 Can speak, or learning to …
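A minimal sketch of the LoRA/PEFT fine-tuning setup named above, using Hugging Face Transformers and PEFT. The gpt2 base model, the target module, and the hyperparameters are illustrative choices, and the dataset and training loop are omitted.

```python
# Sketch: attaching a LoRA adapter to a small causal LM with PEFT.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = "gpt2"                                  # small stand-in base model
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8,                         # low-rank dimension of the adapter
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],   # GPT-2's fused attention projection; model-specific
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()   # only the adapter weights are trainable
```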
translate business needs into scalable architecture on the Microsoft platform. The ideal candidate will be a thought leader, trusted advisor, and hands-on architect capable of shaping next-generation data platforms and driving adoption across stakeholder groups. Lead end-to-end architecture design and implementation for Microsoft Fabric across enterprise data programs. Collaborate with business and technical stakeholders … Lakehouse architecture, Delta/Parquet formats, and data governance tools. Experience integrating Fabric with Azure-native services (e.g., Azure AI Foundry, Purview, Entra ID). Familiarity with Generative AI, RAG-based architecture, and Fabric Notebooks is a plus. About Capgemini Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and …
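Purely as an illustration of the Fabric Notebook work mentioned as a plus, a minimal PySpark sketch that reads a Delta table from a lakehouse and writes an aggregated table back. The table names, columns, and lakehouse path are hypothetical, and in a Fabric notebook the spark session is already provided.

```python
# Sketch: lakehouse Delta read/aggregate/write in a Fabric-style notebook.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()   # pre-provisioned in a Fabric notebook

# Hypothetical Delta table in the attached lakehouse.
df = spark.read.format("delta").load("Tables/sales_orders")

monthly = (
    df.withColumn("order_month", F.date_trunc("month", F.col("order_date")))
      .groupBy("order_month")
      .agg(F.sum("amount").alias("total_amount"))
)
monthly.write.format("delta").mode("overwrite").saveAsTable("sales_monthly")
```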