Gen AI Engineer

We're hiring a Gen AI Engineer

If you've built LLM systems that actually work in production — not prototypes, not demos — this is worth reading.

What our client does

They're an AI company operating at the intersection of computer vision and large language models, building intelligent workflows for industries where work happens in the field: utilities, telecoms, energy, retail. Their platform processes real-world operational data at scale, helping global enterprise clients make faster, safer and more accurate decisions about their assets and people.

They own a dataset in this space that nobody else has. That matters when you're building AI that actually needs to work.

50+ models in production. Real clients. Real scale.

The role

You'll own LLM applications end to end. Architecture, prompt engineering, RAG pipelines, eval tooling, guardrails, all of it. No handoffs, no red tape. You build it, you own it.

Stack: LangChain / LangGraph, AWS Bedrock, Pinecone / FAISS / Weaviate, DeepEval / Langfuse, Docker, Python. Multimodal capability essential.

What we're looking for

  • 2+ years building and deploying production LLM systems
  • Hands-on with RAG, vector databases and retrieval optimisation
  • Eval tooling experience: DeepEval, Langfuse, Ragas or equivalent
  • Strong Python and LLM orchestration frameworks
  • Multimodal LLM experience (text + image/video)
  • Able to own models end to end, without hand-holding
Why join

Small senior team. Bi-weekly shipping cycles. Remote-first with an optional London office and monthly meetups. Equity, healthcare allowance, pension.

The problems are hard. The data is unique. The ownership is real.

Job Details

Company: Wave Group
Location: England, United Kingdom
Hybrid / Remote Options