Central London, London, United Kingdom Hybrid / WFH Options
Staffworx Limited
custom LLM integrations). Exposure to AI ethics, data privacy, and compliance regulations. Prior experience in multi-agent systems or autonomous AI workflows. Hands-on experience with vector databases (Pinecone, Weaviate, FAISS) and AI embeddings. Remote Working: Some remote working. Country: United Kingdom. Location: WC1. Job Type: Contract or Permanent. Start Date: Apr-Jul 25. Duration: 9 months initial or permanent. Visa Requirement: Applicants must be eligible.
RAG) for augmenting LLMs with domain-specific knowledge. Prompt engineering and fine-tuning for tailoring model behavior to business-specific contexts. Use of embedding stores and vector databases (e.g., Pinecone, Redis, Azure AI Search) to support semantic search and recommendation systems. Building intelligent features like AI-powered chatbots, assistants, and question-answering systems using LLMs and conversational agents. Awareness of …
London (City of London), South East England, United Kingdom
Zensar Technologies
RAG) for augmenting LLMs with domain-specific knowledge. Prompt engineering and fine-tuning for tailoring model behavior to business-specific contexts. Use of embedding stores and vector databases (e.g., Pinecone, Redis, Azure AI Search) to support semantic search and recommendation systems. Building intelligent features like AI-powered chatbots, assistants, and question-answering systems using LLMs and conversational agents. Awareness of …
/or LLM-powered applications in production environments. Proficiency in Python and ML libraries such as PyTorch, Hugging Face Transformers, or TensorFlow. Experience with vector search tools (e.g., FAISS, Pinecone, Weaviate) and retrieval frameworks (e.g., LangChain, LlamaIndex). Hands-on experience with fine-tuning and distillation of large language models. Comfortable with cloud platforms (Azure preferred), CI/CD tools …
London, England, United Kingdom Hybrid / WFH Options
Enable International
/or LLM-powered applications in production environments. Proficiency in Python and ML libraries such as PyTorch, Hugging Face Transformers, or TensorFlow. Experience with vector search tools (e.g., FAISS, Pinecone, Weaviate) and retrieval frameworks (e.g., LangChain, LlamaIndex). Hands-on experience with fine-tuning and distillation of large language models. Comfortable with cloud platforms (Azure preferred), CI/CD tools …
in Python, with expertise in using frameworks like Hugging Face Transformers, LangChain, OpenAI APIs, or other LLM orchestration tools. A solid understanding of tokenisation, embedding models, vector databases (e.g., Pinecone, Weaviate, FAISS), and retrieval-augmented generation (RAG) pipelines. Experience designing and evaluating LLM-powered systems such as chatbots, summarisation tools, content generation workflows, or intelligent data extraction pipelines. Deep understanding …
MLOps (model/component dockerization, Kubernetes deployment) in multiple environments (AWS, Azure, GCP). Operationalization of AI solutions to production. • Relational DB (SQL), Graph DB (Neo4j) and Vector DB (Pinecone, Weaviate, Qdrant) • Guide team to debug issues with pipeline failures • Engage with Business/Stakeholders with status updates on progress of development and issue fixes • Automation, Technology and Process Improvement • Experience designing and implementing ML Systems & pipelines, MLOps practices • Exposure to event-driven orchestration, Online Model deployment • Hands-on experience in working with client IT/Business teams …
and fine-tune SLMs/LLMs using domain-specific data (e.g., ITSM, security, operations) • Design and optimize Retrieval-Augmented Generation (RAG) pipelines with vector DBs (e.g., FAISS, Chroma, Weaviate, Pinecone) • Develop agent-based architectures using LangGraph, AutoGen, CrewAI, or custom frameworks • Integrate AI agents with enterprise tools (ServiceNow, Jira, SAP, Slack, etc.) • Optimize model performance (quantization, distillation, batching, caching) • Collaborate … and attention mechanisms • Experience with LangChain, Transformers (Hugging Face), or LlamaIndex • Working knowledge of LLM fine-tuning (LoRA, QLoRA, PEFT) and prompt engineering • Hands-on experience with vector databases (FAISS, Pinecone, Weaviate, Chroma) • Cloud experience on Azure, AWS, or GCP (Azure preferred) • Experience with Kubernetes, Docker, and scalable microservice deployments • Experience integrating with REST APIs, webhooks, and enterprise systems (ServiceNow, SAP …
and modern web frameworks Deep experience with AI/ML frameworks (PyTorch, TensorFlow, Transformers, LangChain) Mastery of prompt engineering and fine-tuning Large Language Models Proficient in vector databases (Pinecone, Weaviate, Milvus) and embedding technologies Expert in building RAG (Retrieval-Augmented Generation) systems at scale Strong experience with MLOps practices and model deployment pipelines Proficient in cloud AI services (AWS …
skills and ability to work in a team environment. Preferred Qualifications: Experience working with large-scale AI applications and personalization engines. Familiarity with production-scale vector databases (e.g., Qdrant, Pinecone, Weaviate). Understanding of AI model interpretability and ethical AI considerations. Exposure to real-time AI applications and MLOps workflows. Why Join Us? Work alongside industry experts on cutting-edge …
healthcare data interoperability (FHIR, HL7, CDA). You've built real-time AI applications, including voice AI, speech recognition, or NLP pipelines. You have experience in vector databases (e.g., Pinecone, Weaviate) and retrieval-augmented generation (RAG) architectures. What's in it for you? The opportunity to build and scale AI models in production that directly impact healthcare efficiency. A role …
outcomes across enterprise platforms. Your future duties and responsibilities: Design, develop, and deploy Generative AI solutions for real-world applications. Implement RAG pipelines by integrating vector databases (e.g., FAISS, Pinecone, OpenSearch). Perform LLM fine-tuning and prompt optimization for domain-specific use cases. Build and manage AI workflows on AWS SageMaker, Bedrock, and other cloud-native services. Develop clean …
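The RAG pipelines these listings describe follow a common pattern: retrieve the most relevant passages for a query, then assemble them into a grounded prompt for the model. A minimal sketch of that pattern, using a toy token-overlap retriever in place of a real vector database such as FAISS or OpenSearch (the documents and function names here are made up for illustration):

```python
# Minimal sketch of the retrieval half of a RAG pipeline.
# A production system would embed documents into a vector database
# (FAISS, Pinecone, OpenSearch) and send the prompt to an LLM endpoint;
# here retrieval is a simple token-overlap score and no model is called.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most tokens with the query."""
    q = tokenize(query)
    ranked = sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble retrieved passages into a grounded prompt."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

docs = [
    "Pinecone is a managed vector database.",
    "FAISS is a library for similarity search over dense vectors.",
    "OpenSearch supports k-NN vector queries.",
]
query = "What is FAISS used for?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)
```

In a real pipeline the overlap score would be replaced by nearest-neighbour search over embeddings; the prompt-assembly step stays essentially the same.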
Newcastle upon Tyne, England, United Kingdom Hybrid / WFH Options
Capgemini
CI/CD: Experience with continuous integration and deployment tools such as GitLab, GitHub, or Jenkins. Database Management Vector Databases: Experience with (but not limited to) ChromaDB, Pinecone, PGVector, MongoDB, Qdrant, etc. NoSQL: Familiarity with NoSQL databases (e.g., MongoDB preferred). SQL: Experience working with SQL databases like PostgreSQL. Version Control Proficient in Git and version control platforms …
virtual assistants): Requirements: • Strong experience with Python and AI/ML libraries (LangChain, TensorFlow, PyTorch) • Experience with frontend frameworks like React or Angular • Knowledge of vector databases (e.g., FAISS, Pinecone, Weaviate) • Familiarity with LLM integrations (e.g., OpenAI, Hugging Face) • Experience building and consuming REST/gRPC APIs • Understanding of prompt engineering and RAG architectures • Familiar with cloud platforms (AWS, GCP, or …
Slough, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
What you’ll do Design & build backend micro-services (Python/FastAPI) that power RAG pipelines, user queries, and analytics. Develop retrieval infrastructure: orchestrate embedding generation, vector databases (PGVector, Pinecone, Weaviate), and hybrid search. Implement evaluation framework for search quality and answer accuracy (BLEU/ROUGE, human-in-the-loop, automatic hallucination checks). Deploy & monitor services on GCP (Cloud … ship weekly increments. Champion best practices in testing, secure data handling (NHS DSPT), and GDPR compliance. Tech you’ll use Python, FastAPI, LangChain/LlamaIndex, PostgreSQL + PGVector, Redis, Pinecone/Weaviate, Vertex AI, Cloud Run, Docker, Terraform, Prometheus/Grafana, GitHub Actions What we’re looking for Master’s degree in Computer Science, Software Engineering, or related field; or …
London, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
What you’ll do Design & build backend micro-services (Python/FastAPI) that power RAG pipelines, user queries, and analytics. Develop retrieval infrastructure: orchestrate embedding generation, vector databases (PGVector, Pinecone, Weaviate), and hybrid search. Implement evaluation framework for search quality and answer accuracy (BLEU/ROUGE, human-in-the-loop, automatic hallucination checks). Deploy & monitor services on GCP (Cloud … ship weekly increments. Champion best practices in testing, secure data handling (NHS DSPT), and GDPR compliance. Tech you’ll use Python • FastAPI • LangChain/LlamaIndex • PostgreSQL + PGVector • Redis • Pinecone/Weaviate • Vertex AI • Cloud Run • Docker • Terraform • Prometheus/Grafana • GitHub Actions What we’re looking for Master’s degree in Computer Science, Software Engineering, or related field; or …
retrieval-augmented generation, prompt management, model orchestration). Work with embeddings, vector stores, and similarity search to enable contextual AI responses. Integrate with vector databases (e.g., FAISS, Weaviate, or Pinecone) to support semantic search and information retrieval. Build scalable APIs and services using FastAPI or similar frameworks. Use tools like MLflow to manage model experimentation, versioning, and deployment. Collaborate closely … in Python, with experience building services using FastAPI or similar frameworks. Working with embeddings for text or document representation and semantic search. Familiarity with vector databases (e.g., FAISS, Weaviate, Pinecone). Understanding of AI infrastructure: versioning, tracking, and deployment with tools like MLflow. Exposure to building production-grade APIs, services, or workflows in an agile, collaborative environment. Awareness of AI …
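The "embeddings, vector stores, and similarity search" mentioned above reduce to one core operation: rank stored vectors by similarity to a query vector. A toy illustration of what a vector database like FAISS or Weaviate does under the hood (the three-dimensional vectors and document labels here are invented for the example; real embeddings have hundreds of dimensions and come from an embedding model):

```python
# Toy cosine-similarity search illustrating what a vector store performs.
# Real systems use learned embeddings and an approximate-nearest-neighbour
# index such as FAISS; the tiny hand-made vectors below are for illustration.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical document embeddings (would come from an embedding model).
index = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api rate limits": [0.0, 0.2, 0.9],
}

def search(query_vec: list[float], k: int = 1) -> list[str]:
    """Return the k document keys most similar to the query vector."""
    ranked = sorted(index, key=lambda d: cosine(query_vec, index[d]), reverse=True)
    return ranked[:k]

print(search([0.85, 0.15, 0.05]))  # nearest to "refund policy"
```

Semantic search then means: embed the user's query with the same model used for the documents, and return the nearest stored vectors.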
London, England, United Kingdom Hybrid / WFH Options
Praktiki
What you’ll do Design & build backend micro-services (Python/FastAPI) that power RAG pipelines, user queries, and analytics. Develop retrieval infrastructure: orchestrate embedding generation, vector databases (PGVector, Pinecone, Weaviate), and hybrid search. Implement evaluation framework for search quality and answer accuracy (BLEU/ROUGE, human-in-the-loop, automatic hallucination checks). Deploy & monitor services on GCP (Cloud … ship weekly increments. Champion best practices in testing, secure data handling (NHS DSPT), and GDPR compliance. Tech you’ll use Python • FastAPI • LangChain/LlamaIndex • PostgreSQL + PGVector • Redis • Pinecone/Weaviate • Vertex AI • Cloud Run • Docker • Terraform • Prometheus/Grafana • GitHub Actions What we’re looking for Master’s degree in Computer Science, Software Engineering, or related field; or …
monitoring. Full-Stack Integration: Develop APIs and integrate ML models into web applications using FastAPI, Flask, React, TypeScript, and Node.js. Vector Databases & Search: Implement embeddings and retrieval mechanisms using Pinecone, Weaviate, FAISS, Milvus, ChromaDB, or OpenSearch. Required skills & experience: 3-5+ years in machine learning and software development. Proficient in Python, PyTorch or TensorFlow or Hugging Face Transformers. Experience …