platforms such as AzureML, Google Cloud, or AWS. Possess hands-on experience in using and developing LLM-powered applications, including retrieval-augmented generation (RAG), prompt engineering, and fine-tuning. Produce high-quality, coherent documentation to support their work, even though formal research output is not expected. Prior cybersecurity or business domain expertise is not …
Greater London, England, United Kingdom Hybrid / WFH Options
Intellect Group
grade systems. 🔍 What You’ll Be Working On Assist in the fine-tuning and evaluation of domain-specific LLMs, applying retrieval-augmented generation (RAG) and prompt engineering techniques. Contribute to the development of multi-agent systems using frameworks such as AutoGen, LangGraph, LangChain, or CrewAI. Support the integration of AI safety techniques into …
City of London, London, United Kingdom Hybrid / WFH Options
Anson McCade
varied use cases. Build agentic workflows and reasoning pipelines using frameworks such as LangChain, LangGraph, CrewAI, Autogen, and LangFlow. Implement retrieval-augmented generation (RAG) pipelines using vector databases like Pinecone, FAISS, Chroma, or PostgreSQL. Fine-tune prompts to optimise performance, reliability, and alignment. Design and implement memory modules for short-term and long-term … cloud AI tools, observability platforms, and performance optimisation. This is an opportunity to work at the forefront of AI innovation, where your work will directly shape how next-generation systems interact, reason, and assist.
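The RAG pipeline described above can be sketched in a few lines. This is a minimal, dependency-free illustration: the bag-of-words "embedding" stands in for a real embedding model, and the linear scan over documents stands in for a vector database such as Pinecone, FAISS, or Chroma; the document strings and function names are invented for the example.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: lowercase bag-of-words counts (a real system
    # would call an embedding model here).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank all documents by similarity to the query; a vector DB
    # does this with an approximate nearest-neighbour index instead.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # The "augmented" step: stuff retrieved context into the prompt
    # before sending it to the LLM.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Pinecone is a managed vector database.",
    "LoRA adds low-rank adapters for cheap fine-tuning.",
]
print(build_prompt("What is a vector database?", docs))
```

In production the same shape holds: embed once at ingest time, index in a vector store, retrieve top-k at query time, and ground the generation step in the retrieved passages.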
rapid iteration, prompt engineering, and practical application. You'll fine-tune and optimize foundation models, craft sophisticated multi-agent systems, and invent novel solutions to power the next generation of voice intelligence. What You'll Do Integrate AI solutions into existing products and workflows Collaborate with cross-functional teams to understand business requirements and translate them into technical … AWS, Google Cloud, or Azure Knowledge of Kubernetes and containerization technologies Experience with data science and ML engineering Familiarity with retrieval-augmented generation (RAG) The requirements listed in the job descriptions are guidelines. You don't have to satisfy every requirement or meet every qualification listed. If your skills are transferable we would still …
automate complex legal workflows and enhance user experiences. Advanced Technology Integration: Collaborate on projects that leverage emerging technologies - such as Retrieval-Augmented Generation (RAG) and Knowledge Graphs - to enhance our core product and explore new use cases. Cross-Functional Collaboration: Work closely with cross-functional teams to integrate advanced ML models and NLP solutions …
with enterprise partners. No two weeks will look the same. Fine-tune and privately deploy LLMs - with a focus on Retrieval-Augmented Generation (RAG) pipelines Build and scale computer vision systems - from object detection to image segmentation Apply NLP to real-world business problems - summarisation, entity recognition, information extraction, and more Train and deploy …
infrastructure, vector databases, or search systems. Experience building ML-powered products in production. Knowledge of large language models (LLMs) and retrieval-augmented generation (RAG). Public speaking or published technical content (talks, blog posts, tutorials). Familiarity with the Qdrant ecosystem or similar technologies. Benefits Competitive compensation and equity options. Flexible remote work environment.
As our GTM Manager - London you will focus on the identification and generation of new business opportunities across the UK. You will contribute to BRYTER's growth directly by aligning and reinforcing the value of the BRYTER product suite to the customer's overall business plan and strategic objectives and decision criteria. In this role you will be an … Technology: You'll be at the forefront of tech, working with advanced AI models, including large language models (LLMs) and Retrieval-Augmented Generation (RAG) techniques, gaining hands-on experience with the latest innovations. High-impact role: Your contributions will directly shape BRYTER's growth and success. Collaborative and innovative team: Join a company with …
expert, driving the adoption and implementation of AI solutions across Morningstar's business units. Requirements: Minimum 3 years of hands-on experience implementing AI/machine learning solutions, including RAG, NLP, AI Agents, and transformer models, in commercial applications. Advanced degree (Master's/PhD preferred) in a quantitative, AI or computational field such as Data Analytics, Computer Science, Machine …
London, South East, England, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
Experience working in cloud environments (AWS, GCP, or Azure) Ability to work independently and communicate effectively in a remote team Bonus Points Experience with Hugging Face Transformers, LangChain, or RAG pipelines Knowledge of MLOps tools (e.g. MLflow, Weights & Biases, Docker, Kubernetes) Exposure to data engineering or DevOps practices Contributions to open-source AI projects or research publications What We Offer …
productionize generative AI models. Develop scalable GenAI pipelines that generate high-quality product content: descriptions, reviews, titles, and more. Design and evaluate prompt tuning strategies and RAG systems to ensure factual and engaging outputs. Fine-tune foundation models and develop domain-specific adapters using techniques like LoRA, PEFT, and instruction tuning. Define best practices for model monitoring …
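The LoRA technique named above is worth unpacking: instead of updating a full weight matrix W (d_out x d_in), training touches only two small matrices B (d_out x r) and A (r x d_in) with rank r much smaller than the layer dimensions, and inference uses W + BA. The sketch below uses plain Python lists and invented toy values purely to show the shapes and the parameter saving; real fine-tuning would go through a library such as Hugging Face PEFT.

```python
def matmul(X, Y):
    # Naive dense matrix multiply: (rows x inner) @ (inner x cols).
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def add(X, Y):
    # Element-wise matrix addition.
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

d_out, d_in, r = 4, 4, 1

# Frozen base weights (identity, as a stand-in for pretrained weights).
W = [[1.0 if i == j else 0.0 for j in range(d_in)] for i in range(d_out)]

# The only trainable parameters: a rank-1 adapter (toy values).
B = [[0.1] for _ in range(d_out)]   # d_out x r
A = [[0.2, 0.0, 0.0, 0.0]]         # r x d_in

# Effective weights at inference time: W + B @ A.
W_adapted = add(W, matmul(B, A))

# Trainable parameters drop from d_out*d_in to r*(d_out + d_in).
full_params = d_out * d_in          # 16
lora_params = r * (d_out + d_in)    # 8
print(W_adapted[0][0], full_params, lora_params)
```

At realistic sizes (say d = 4096, r = 8) the same arithmetic cuts trainable parameters per layer by roughly 250x, which is why LoRA and PEFT-style adapters are the standard route to domain-specific fine-tuning.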
are: Experience delivering Large Language Model projects with customers, including LLM API integration, up-to-speed knowledge of foundation models, SFT (Supervised Fine-Tuning), prompt engineering, RAG (retrieval-augmented generation) and/or measuring AI accuracy. Two+ years' experience in solutions architecture or integrating multiple applications/data streams, or ML development within …
to deployment and maintenance Familiarity with implementing solutions leveraging Large Language Models, as well as a deep understanding of how to implement solutions using Retrieval-Augmented Generation (RAG) architecture Experience with graph machine learning (i.e. graph neural networks, graph data science) and practical applications thereof This is complemented by your experience working with Knowledge …
and an understanding of how they facilitate agentic AI development. Strong practical knowledge of prompt engineering techniques and advanced LLM concepts, such as CoT reasoning, prompt chaining, iterative refinement, RAG techniques, MCP, multi-agent architectures, and a clear understanding of when each should be applied. Proven experience with Cloud technologies on GCP or AWS (GCP preferred) including serverless architectures and …
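Prompt chaining, one of the techniques listed above, simply means feeding one model call's output into the next call's prompt. The sketch below shows the control flow only: `call_llm` is a stand-in stub with canned responses, since a real chain would call an LLM API or go through a framework such as LangChain.

```python
def call_llm(prompt: str) -> str:
    # Stub model for the demo: returns canned text instead of
    # calling a real LLM API.
    if prompt.startswith("Summarise"):
        return "RAG grounds LLM answers in retrieved documents."
    return "Key term: retrieval."

def chain(document: str) -> str:
    # Step 1: summarise the document.
    summary = call_llm(f"Summarise in one sentence: {document}")
    # Step 2: the summary becomes input to the next prompt.
    return call_llm(f"Extract the key term from: {summary}")

print(chain("Long text about retrieval-augmented generation..."))
```

Iterative refinement follows the same pattern with a loop: critique the previous output, then prompt the model to revise it until a quality check passes.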
a day In order to be considered for this role you will need to have - Python, SQL, Unix, Cloud (AWS preferred) LLM applications: LangChain, Pydantic-AI (similar frameworks), RAG systems, MCP servers Front end: some experience with TypeScript, React Application servers, web servers, database servers, docker Some context: frontend (TS/React) and a backend (Python/FastAPI) leveraging a …
DevOps: Solid grasp of microservices, Docker, Kubernetes, and CI/CD pipelines (GitHub Actions, Jenkins) AI/ML Foundations: Familiarity with (or eagerness to learn) LLM libraries, vector stores, and RAG paradigms Sector specific knowledge: Experience with financial data systems or forecasting models Communication & Collaboration: Skilled at articulating technical concepts, driving consensus, and working cross-functionally Working at Allica Bank At …
London, South East, England, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
working with a small, agile team focused on deploying open-source models into production environments. Key Responsibilities Fine-tune and evaluate LLMs for domain-specific tasks Build and optimise RAG pipelines using vector databases Develop prompt engineering strategies and orchestration flows Integrate models into backend services via APIs Implement evaluation frameworks for response quality and reliability Essential Skills Strong Python …
Capabilities Full-Stack Engineering Experience - Including infrastructure-as-code for DevOps pipelines, and containerization using tools like Docker and Kubernetes. GenAI Application Development - Experience in building GenAI applications with RAG, prompt/context engineering, embeddings, chat completions, and calling LLM APIs. Agentic AI Expertise - Strong understanding of agentic AI architecture, including LLMs, MCP, agent hierarchies, orchestration, evaluations, memory and trace …
and Infrastructure as Code (IaC) engineering, gained through hands-on development experience. Key areas of proficiency include: Programming Languages: Python, Java and Go. AI/ML: prompt engineering, LLMs, RAG, Semantic Search, Vector Databases, etc Behavior-Driven Development (BDD) Testing: Cucumber, JBehave, Pytest-BDD, etc. CI/CD (Continuous Integration/Continuous Delivery) pipelines: Harness, Tekton, Jenkins, etc. Chaos Engineering …
influence cross-functional stakeholders. Bonus Qualifications: Experience working in regulated or compliance-heavy industries (e.g., legal, finance, healthcare). Familiarity with GenAI technologies (e.g., OpenAI, vector databases, prompt engineering, RAG pipelines). Experience scaling and leading global remote engineering teams. This is a rare opportunity to define and lead the technology behind the next wave of innovation in legal tech.
doing Design and maintain pipelines that support LLM-based features - including metadata processing, semantic enrichment, and structured context retrieval. Design infrastructure for scalable and performant RAG systems (retrieval-augmented generation), including support for Graph RAG where adopted. Experience in processing large volumes and varieties of structured text data. Experience with legal documents would be … advanced features, not just move bytes around. You've worked with vector stores and embedding pipelines. You are excited by newer patterns like hybrid and graph-based retrieval (Graph RAG). You're comfortable owning your systems in production and instrumenting them for observability. You value collaboration and can flex across infrastructure, science, and product conversations. Qualifications 5+ … services; experience with Amazon Bedrock or Claude integration is a plus. Experience with vector databases and embedding pipelines. Familiarity with or interest in graph-based data models and Graph RAG architectures is a strong plus. Working knowledge of Java and TypeScript environments, especially for integration and debugging. Experience using Jira to coordinate delivery. Working for Opus 2 Opus 2 is …
not limited to) Python, Go, TypeScript, React, Kubernetes, Mongo, and Generative AI features requiring adaptability and a willingness to learn. Responsibilities: Lead applied GenAI research, focusing on prompt engineering, RAG, and agentic workflows Architect and deliver end-to-end LLM solutions - from experimentation to production Design evaluation frameworks to benchmark LLM performance, safety, and behaviour Build integration layers connecting LLMs …
teams to quickly prototype and deliver innovative solutions Building complex agentic systems that utilize LLMs Developing scalable distributed information retrieval systems, such as search engines, knowledge graphs, RAG, indexing, ranking, query understanding, and distributed data processing The expected salary range for this position is: Annual Salary: £250,000 - £340,000 GBP Logistics Education requirements: We require at least …