… SageMaker, Bedrock, Lambda, S3, ECS, EKS
Full-Stack Integration: Build APIs (FastAPI, Flask) and integrate with React, TypeScript, Node.js
Vector Search: Use FAISS, Weaviate, Pinecone, ChromaDB, OpenSearch
Required skills & experience:
3–5+ years of experience in ML engineering and software development
Deep Python proficiency with PyTorch, TensorFlow, or Hugging Face …
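As a rough illustration of the "build APIs" plus "vector search" combination this listing asks for, here is a minimal sketch of a FastAPI endpoint backed by a FAISS index. The endpoint name, vector dimension, and the random stand-in corpus are assumptions for the example, not details from the listing.

```python
# Minimal sketch: a FastAPI search endpoint over a FAISS index.
# Dimension, endpoint path, and the random "corpus" are placeholders.
import numpy as np
import faiss
from fastapi import FastAPI
from pydantic import BaseModel

DIM = 384                                   # typical sentence-embedding size
index = faiss.IndexFlatL2(DIM)              # exact L2 nearest-neighbour search
index.add(np.random.rand(1000, DIM).astype("float32"))  # stand-in corpus vectors

app = FastAPI()

class Query(BaseModel):
    vector: list[float]                     # pre-computed query embedding
    k: int = 5

@app.post("/search")
def search(q: Query) -> dict:
    xq = np.asarray([q.vector], dtype="float32")
    distances, ids = index.search(xq, q.k)  # top-k nearest neighbours
    return {"ids": ids[0].tolist(), "distances": distances[0].tolist()}
```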
… use cases and ML/AI opportunities. Experience with containerisation technologies (Docker, Kubernetes) for scalable data solutions. Experience with vector databases and graph databases (e.g., Pinecone, Neo4j, AWS Neptune). Understanding of data mesh/fabric approaches and modern data architecture patterns. Familiarity with AI/ML workflows and their data …
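Since this listing names Neo4j, a short sketch of what basic graph-database interaction looks like with the official neo4j Python driver may help; the connection URI, credentials, and Cypher query are invented for illustration only.

```python
# Illustrative sketch only: querying a graph database with the neo4j Python driver.
# URI, credentials, labels, and the Cypher query are placeholders, not from the listing.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Count Document nodes that are linked to a Topic node
    record = session.run(
        "MATCH (d:Document)-[:ABOUT]->(t:Topic) RETURN count(d) AS n"
    ).single()
    print(record["n"])

driver.close()
```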
City of London, Greater London, UK Hybrid / WFH Options
twentyAI
· Develop and optimise RAG pipelines using LangChain, LlamaIndex, or Haystack.
· Build ingestion workflows (OCR, chunking, embedding, semantic search) and integrate with vector databases (FAISS, Pinecone, Qdrant) (see the sketch after this list).
· Ensure seamless integration of GenAI services into business workflows, prioritising security, scalability, and compliance.
· Collaborate with cross-functional teams (data scientists, architects, engineers …
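A minimal sketch of the ingestion step (chunk → embed → index → semantic search) is below, assuming sentence-transformers and FAISS; the model name, chunk size, and sample texts are arbitrary choices, and frameworks like LangChain, LlamaIndex, or Haystack wrap equivalent steps behind their own abstractions.

```python
# Illustrative chunk -> embed -> index -> search sketch with sentence-transformers + FAISS.
# Model name, chunk size, and documents are placeholders.
import faiss
from sentence_transformers import SentenceTransformer

def chunk(text: str, size: int = 500) -> list[str]:
    """Naive fixed-size character chunking; real pipelines often split on structure."""
    return [text[i:i + size] for i in range(0, len(text), size)]

docs = ["...raw text extracted via OCR...", "...another document..."]
chunks = [c for d in docs for c in chunk(d)]

model = SentenceTransformer("all-MiniLM-L6-v2")           # 384-dim embeddings
embeddings = model.encode(chunks, normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])            # inner product = cosine on normalised vectors
index.add(embeddings)

# Semantic search: embed the query and retrieve the closest chunks
query_vec = model.encode(["payment terms in the contract"], normalize_embeddings=True)
scores, ids = index.search(query_vec, k=3)
print([chunks[i] for i in ids[0]])
```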
LLM orchestration and prompt engineering frameworks such as LangChain or LangGraph, plus designing retrieval-augmented generation (RAG) pipelines. Familiarity with vector databases like Qdrant, Pinecone, or Redis for low-latency AI retrieval. Experience deploying, monitoring, and scaling AI workloads on cloud platforms and services such as AWS, GCP, or BigQuery. Bonus points …
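For the low-latency retrieval requirement, a small sketch using the Qdrant Python client is shown below; the collection name, vector size, payloads, and the in-memory instance are assumptions made for the example.

```python
# Illustrative sketch: indexing and querying with the Qdrant Python client.
# Collection name, vector size, and payloads are invented for the example.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(":memory:")           # in-memory instance for local experiments

client.create_collection(
    collection_name="kb",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

client.upsert(
    collection_name="kb",
    points=[
        PointStruct(id=1, vector=[0.1, 0.9, 0.1, 0.0], payload={"text": "refund policy"}),
        PointStruct(id=2, vector=[0.8, 0.1, 0.0, 0.1], payload={"text": "shipping times"}),
    ],
)

# Retrieve the closest chunk for an (already embedded) query vector
hits = client.search(collection_name="kb", query_vector=[0.1, 0.8, 0.2, 0.0], limit=1)
print(hits[0].payload["text"])
```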
… for future modular agent deployment
Stack
AI: Claude API (Anthropic), Exa (search/enrichment)
Workflow Orchestration: n8n (webhooks, branching, logic)
Memory & DB: PostgreSQL, Redis, Pinecone
Front-End: React, Tailwind
Infra: Replit, Make.com (light no-code layer), AWS/GCP optional
Requirements:
Required
7+ years in full-stack development (strong back…
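Since the stack centres on the Claude API, here is a minimal sketch of a call through Anthropic's Python SDK; the model name and prompt are placeholders, and the client assumes ANTHROPIC_API_KEY is set in the environment.

```python
# Minimal sketch of calling the Claude API with Anthropic's Python SDK.
# Model name and prompt are placeholders; ANTHROPIC_API_KEY must be set in the environment.
import anthropic

client = anthropic.Anthropic()              # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-latest",       # substitute whichever Claude model you target
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarise this enriched lead in two sentences: ..."}],
)
print(message.content[0].text)
```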