Artificial Intelligence Engineer
AI Platform Engineer
🔥Build the systems that take AI from idea through to production and real-world impact at one of the UK’s fastest-growing fintechs🔥
📍 London (Hybrid)
💰 Up to £125k + Benefits
The Mission
This is a production engineering role, not a research or experimentation position.
You’ll help build the core AI platform that lets teams rapidly design, test, and safely deploy AI solutions at scale. The focus is on robust systems, real users, and measurable business outcomes, not isolated prototypes.
If you enjoy combining cloud infrastructure, distributed systems, and modern AI tooling to create platforms that power entire organisations, this role will feel like home.
Why This Opportunity Is Special
• Production-grade AI systems built for reliability, scale, and real-world traffic.
• Platform ownership: develop the APIs, tooling, and infrastructure other teams depend on.
• Full lifecycle impact: from early experimentation to enterprise deployment.
• Modern AI stack: LLMs, agent frameworks, and RAG pipelines in production.
• Hard engineering problems: latency, scaling, concurrency, observability, resilience.
What You’ll Be Working On
• Designing and building the AI platform powering LLMs, agents, and intelligent workflows.
• Creating APIs, SDKs, and internal developer tools to simplify AI adoption.
• Orchestrating multi-step AI pipelines using tools like LangGraph, CrewAI, and similar frameworks.
• Building and scaling retrieval systems (RAG, vector search, knowledge layers).
• Implementing observability, logging, and safety guardrails for production AI systems.
• Managing infrastructure with Kubernetes, Terraform, and modern CI/CD practices.
• Optimising performance across latency, throughput, and system load.
• Partnering with product, security, and engineering teams to scale AI safely.
What We’re Looking For
• Strong software or platform engineering background (Python preferred; Go/Java also great).
• Experience with cloud-native, distributed systems (GCP a plus).
• Hands-on Kubernetes experience plus exposure to event-driven architectures or service mesh.
• Familiarity with LLM orchestration tools (LangChain, LangGraph, CrewAI, etc.).
• Experience building or working with RAG pipelines and vector databases.
• Strong grasp of CI/CD, infrastructure-as-code (Terraform, Helm, etc.).
• Understanding of production observability and system reliability practices.
• Awareness of AI safety, governance, and evaluation approaches.
What Success Looks Like
You’ve moved beyond experimentation and understand what it takes to run AI in production. You build systems that perform reliably under real user demand, design architectures that scale and remain resilient even when things fail, and continuously improve deployed systems through monitoring, feedback loops, and real-world usage insights.