Central London, London, United Kingdom Hybrid / WFH Options
Staffworx Limited
custom LLM integrations). Exposure to AI ethics, data privacy, and compliance regulations. Prior experience in multi-agent systems or autonomous AI workflows. Hands-on experience with vector databases (Pinecone, Weaviate, FAISS) and AI embeddings. Remote Working: Some remote working. Country: United Kingdom. Location: WC1. Job Type: Contract or Permanent. Start Date: Apr-Jul 25. Duration: 9 months initial or permanent. Visa Requirement: Applicants must be eligible …
Retrieval-Augmented Generation (RAG) for augmenting LLMs with domain-specific knowledge. Prompt engineering and fine-tuning for tailoring model behavior to business-specific contexts. Use of embedding stores and vector databases (e.g., Pinecone, Redis, Azure AI Search) to support semantic search and recommendation systems. Building intelligent features like AI-powered chatbots, assistants, and question-answering systems using LLMs and conversational agents. Awareness of …
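The retrieval requirements above reduce to a small loop: embed the documents, index the vectors, and look up nearest neighbours for each query. A minimal sketch of that loop using FAISS and sentence-transformers; the model name and sample documents are illustrative choices, not details from the posting.

```python
# Minimal semantic-search sketch: embed a few documents, index them in FAISS,
# and retrieve the closest matches for a query.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "How do I reset my password?",
    "Our refund policy allows returns within 30 days.",
    "The API rate limit is 100 requests per minute.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")           # small embedding model
doc_vecs = model.encode(docs, normalize_embeddings=True)  # shape: (n_docs, dim)

index = faiss.IndexFlatIP(int(doc_vecs.shape[1]))  # inner product = cosine on normalized vectors
index.add(np.asarray(doc_vecs, dtype="float32"))

query_vec = model.encode(["how can I change my password"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), 2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")
```

The same shape carries over to a managed store such as Pinecone or Azure AI Search: only the index, add, and query calls change.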
and/or LLM-powered applications in production environments. Proficiency in Python and ML libraries such as PyTorch, Hugging Face Transformers, or TensorFlow. Experience with vector search tools (e.g., FAISS, Pinecone, Weaviate) and retrieval frameworks (e.g., LangChain, LlamaIndex). Hands-on experience with fine-tuning and distillation of large language models. Comfortable with cloud platforms (Azure preferred), CI/CD tools …
Express, Next.js • Integrate ML models and embeddings into production pipelines using AWS SageMaker, Bedrock or OpenAI APIs • Build support systems for autonomous agents including memory storage, vector search (e.g., Pinecone, Weaviate) and tool registries • Enforce system-level requirements for security, compliance, observability and CI/CD • Drive PoCs and reference architectures for multi-agent coordination, intelligent routing and goal-directed
… similar • Experience with secure cloud deployments and production ML model integration • Bonus Skills: Applied work with multi-agent systems, tool orchestration, or autonomous decision-making • Experience with vector databases (Pinecone, Weaviate, FAISS) and embedding pipelines • Knowledge of AI chatbot frameworks (Rasa, BotPress, Dialogflow) or custom LLM-based UIs • Awareness of AI governance, model auditing, and data privacy regulation (GDPR, DPA …
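The tool-registry item in the listing above is easiest to picture as a plain mapping from tool names to callables that the agent runtime dispatches against. A hedged sketch in plain Python; the tool names, argument format, and dispatch shape are assumptions made for illustration, not details from the role.

```python
# Illustrative tool registry for an agent runtime: tools register themselves by
# name, and the runtime dispatches a model-proposed call against the registry.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[..., str]] = {}

def register(name: str):
    """Decorator that adds a callable to the shared tool registry."""
    def wrap(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return wrap

@register("search_orders")
def search_orders(customer_id: str) -> str:
    # Placeholder body; a real tool would call an internal API here.
    return f"2 open orders found for customer {customer_id}"

def dispatch(tool_call: dict) -> str:
    """Execute a {'name': ..., 'arguments': {...}} call proposed by the model."""
    fn = TOOLS.get(tool_call["name"])
    if fn is None:
        return f"Unknown tool: {tool_call['name']}"
    return fn(**tool_call["arguments"])

print(dispatch({"name": "search_orders", "arguments": {"customer_id": "C-42"}}))
```

Memory storage and vector search slot in the same way: the agent loop reads from them before each step and writes back afterwards.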
and fine-tune SLMs/LLMs using domain-specific data (e.g., ITSM, security, operations) • Design and optimize Retrieval-Augmented Generation (RAG) pipelines with vector DBs (e.g., FAISS, Chroma, Weaviate, Pinecone) • Develop agent-based architectures using LangGraph, AutoGen, CrewAI, or custom frameworks • Integrate AI agents with enterprise tools (ServiceNow, Jira, SAP, Slack, etc.) • Optimize model performance (quantization, distillation, batching, caching) • Collaborate
… and attention mechanisms • Experience with LangChain, Transformers (HuggingFace), or LlamaIndex • Working knowledge of LLM fine-tuning (LoRA, QLoRA, PEFT) and prompt engineering • Hands-on experience with vector databases (FAISS, Pinecone, Weaviate, Chroma) • Cloud experience on Azure, AWS, or GCP (Azure preferred) • Experience with Kubernetes, Docker, and scalable microservice deployments • Experience integrating with REST APIs, webhooks, and enterprise systems (ServiceNow, SAP …
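For the LoRA/QLoRA/PEFT bullet, the core move is attaching a low-rank adapter configuration to a base model so that only the adapter weights are trained. A sketch with Hugging Face PEFT; the base checkpoint, rank, and target module names are illustrative defaults rather than values from the role.

```python
# Sketch: wrap a small causal LM with LoRA adapters via Hugging Face PEFT.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_id = "facebook/opt-125m"  # small model so the sketch runs on CPU
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_cfg = LoraConfig(
    r=8,                                   # adapter rank
    lora_alpha=16,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections in OPT
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

From here the wrapped model drops into a normal transformers training loop; QLoRA adds 4-bit quantization of the frozen base weights on top of the same idea.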
in Python, with expertise in frameworks like Hugging Face Transformers, LangChain, OpenAI APIs, or other LLM orchestration tools. A solid understanding of tokenization, embedding models, vector databases (e.g., Pinecone, Weaviate, FAISS), and retrieval-augmented generation (RAG) pipelines. Experience designing and evaluating LLM-powered systems such as chatbots, summarization tools, content generation workflows, or intelligent data extraction pipelines. Deep understanding …
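Tokenization is the item in that list that is simplest to make concrete: text goes in, integer ids and subword tokens come out, and those ids are what the embedding model or LLM actually consumes. A small sketch with a Hugging Face tokenizer; the checkpoint is an illustrative choice.

```python
# Tokenization sketch: inspect the ids and subword tokens a model would see.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = tok("Retrieval-augmented generation grounds LLM answers in your own data.")

print(enc["input_ids"][:10])                              # integer ids fed to the model
print(tok.convert_ids_to_tokens(enc["input_ids"])[:10])   # corresponding subword tokens
```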
controllers. Develop and maintain AI microservices using Docker, Kubernetes, and FastAPI, ensuring smooth model serving and error handling; Vector Search & Retrieval: Implement retrieval-augmented workflows: ingest documents, index embeddings (Pinecone, FAISS, Weaviate), and build similarity search features. Rapid Prototyping: Create interactive AI demos and proofs-of-concept with Streamlit, Gradio, or Next.js for stakeholder feedback; MLOps & Deployment: Implement CI/
… experience fine-tuning LLMs via OpenAI, HuggingFace or similar APIs; Strong proficiency in Python; Deep expertise in prompt engineering and tooling like LangChain or LlamaIndex; Proficiency with vector databases (Pinecone, FAISS, Weaviate) and document embedding pipelines; Proven rapid-prototyping skills using Streamlit or equivalent frameworks for UI demos. Familiarity with containerization (Docker) and at least one orchestration/deployment platform …
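The "FastAPI, ensuring smooth model serving and error handling" point maps onto a small service like the one below. The summarization pipeline, checkpoint, and endpoint name are illustrative stand-ins for whatever model the service would actually wrap.

```python
# Minimal FastAPI microservice sketch: serve a text model behind one endpoint
# with basic input validation and error handling.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

class SummarizeRequest(BaseModel):
    text: str

@app.post("/summarize")
def summarize(req: SummarizeRequest):
    if not req.text.strip():
        raise HTTPException(status_code=400, detail="text must not be empty")
    try:
        result = summarizer(req.text, max_length=60, min_length=10, do_sample=False)
        return {"summary": result[0]["summary_text"]}
    except Exception as exc:  # surface model failures as a 500 with a readable message
        raise HTTPException(status_code=500, detail=str(exc))

# Run locally with: uvicorn app:app --reload
```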
monitoring. Full-Stack Integration: Develop APIs and integrate ML models into web applications using FastAPI, Flask, React, TypeScript, and Node.js. Vector Databases & Search: Implement embeddings and retrieval mechanisms using Pinecone, Weaviate, FAISS, Milvus, ChromaDB, or OpenSearch. Required skills & experience: 3-5+ years in machine learning and software development • Proficient in Python, PyTorch or TensorFlow or Hugging Face Transformers • Experience …
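Of the vector stores named there, ChromaDB is the quickest to show end to end because its in-memory client ships with a default embedding function. A hedged sketch; the collection name and documents are invented for the example.

```python
# Embeddings + retrieval sketch with ChromaDB's in-memory client.
import chromadb

client = chromadb.Client()                            # in-memory instance, no server needed
collection = client.create_collection("support_docs")

collection.add(
    ids=["doc1", "doc2", "doc3"],
    documents=[
        "Invoices are emailed on the first working day of each month.",
        "Password resets are handled through the self-service portal.",
        "Enterprise customers get a dedicated support channel.",
    ],
)

results = collection.query(query_texts=["how do I reset my password"], n_results=2)
print(results["documents"][0])   # the two closest documents for the query
```

Swapping to Pinecone, Milvus, or OpenSearch changes the client calls but not the embed-index-query pattern.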
day. What You’ll Own: Architect and develop backend microservices (Python/FastAPI) that power our RAG pipelines and analytics • Build scalable infrastructure for retrieval and vector search (PGVector, Pinecone, Weaviate) • Design evaluation frameworks to improve search accuracy and reduce hallucinations • Deploy and manage services on GCP (Vertex AI, Cloud Run, BigQuery) using Terraform and CI/CD best practices
… teams to iterate fast and deliver impact • Embed security, GDPR compliance, and testing best practices into the core of our stack. Tech Stack: Python • FastAPI • PostgreSQL + PGVector • Redis • Pinecone/Weaviate • Vertex AI • Cloud Run • Docker • Terraform • GitHub Actions • LangChain/LlamaIndex. What We’re Looking For: 5+ years building production-grade backend systems (preferably in Python) • Strong background …
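The "evaluation frameworks to improve search accuracy" line usually starts with something as small as recall@k over a labelled set of query-to-relevant-document pairs. A toy sketch in plain Python; the queries, document ids, and retriever output are invented for illustration.

```python
# Retrieval-evaluation sketch: compute recall@k from labelled relevance data and
# whatever the retriever returned (hard-coded here in place of a live service).
from typing import Dict, List, Set

def recall_at_k(relevant: Dict[str, Set[str]], retrieved: Dict[str, List[str]], k: int) -> float:
    """Fraction of queries whose top-k results contain at least one relevant document."""
    hits = sum(
        1 for query, rel_ids in relevant.items()
        if set(retrieved.get(query, [])[:k]) & rel_ids
    )
    return hits / len(relevant)

relevant = {"reset password": {"doc_42"}, "refund policy": {"doc_7", "doc_9"}}
retrieved = {"reset password": ["doc_42", "doc_3"], "refund policy": ["doc_11", "doc_2"]}

print(f"recall@2 = {recall_at_k(relevant, retrieved, k=2):.2f}")  # 0.50 for this toy data
```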
machine learning fundamentals, including supervised/unsupervised learning. Experience with cloud environments – ideally Azure, but AWS or GCP also considered. Familiarity with LLMs, prompt engineering, and vector databases (e.g. Pinecone, FAISS). Practical experience building production-ready AI applications. Ability to work on-site in Newcastle in a collaborative, agile environment. A curious mindset, eagerness to learn, and a genuine …
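On the prompt-engineering point, the pattern that connects it to the vector-database requirement is assembling retrieved snippets into a grounded prompt before calling the model. A minimal sketch; the template wording, snippets, and question are invented for the example.

```python
# Grounded-prompt sketch: stitch retrieved snippets into the context section of a
# prompt so the model answers from the supplied material rather than from memory.
retrieved_snippets = [
    "Annual leave requests must be submitted at least two weeks in advance.",
    "Unused leave can be carried over until the end of March.",
]
question = "Can I carry over unused holiday?"

context = "\n".join(f"- {s}" for s in retrieved_snippets)
prompt = (
    "Answer the question using only the context below. "
    "If the context is not sufficient, say you don't know.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}\nAnswer:"
)
print(prompt)  # this string is what gets sent to the LLM
```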
production • Hands-on experience with frameworks like LangChain, LangGraph, or custom-built agent orchestration setups • Familiarity with LLM APIs (OpenAI, Anthropic, Mistral, etc.), embedding stores, retrieval pipelines (e.g. Weaviate, Pinecone), and eval tooling • Comfort building and testing AI workflows that interact with external APIs, file systems, simulations, and toolchains • Bonus: interest or experience in robotics, mechanical/aerospace workflows, or …
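For the hosted LLM APIs mentioned there, a single call from an agent workflow looks like the sketch below, shown with the v1-style OpenAI Python client. The model name is an illustrative choice and the key comes from the environment; swapping to Anthropic or Mistral changes the client but not the overall shape of the call.

```python
# Hedged sketch of one LLM API call inside an agent workflow (OpenAI v1 client).
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a planning assistant for an engineering workflow."},
        {"role": "user", "content": "List the steps to validate a CAD export before simulation."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```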