Stay up-to-date with new technologies and best practices in data engineering, advancements in generative AI, transformer architectures, and retrieval-augmented generation (RAG) techniques. Ensure data security standards are met in conjunction with the Information Security team. Manage AI/ML projects and mentor junior team members. Experience: Extensive experience in data …
Haywards Heath, Sussex, United Kingdom Hybrid / WFH Options
First Central Services
AI expert ready to take your skills to the next level? Do words like Azure OpenAI, Cognitive Services, prompt engineering, Retrieval-Augmented Generation (RAG) architectures, vector stores, and API integrations make you light up inside? If so, we want to hear from you! At 1st Central, we're on an exciting journey with AI … develop AI and Generative AI solutions using services like Azure OpenAI and Azure Cognitive Services. Implement prompt engineering techniques and Retrieval-Augmented Generation (RAG) architectures. Ensure scalability, security, auditability, and efficiency of AI solutions through detailed system design and development practices. Deploy and manage AI solutions via CI/CD pipelines in Azure DevOps … deploying, and managing production-grade AI and Generative AI systems. Extensive experience with Cloud-based AI and Cognitive Services, and Retrieval-Augmented Generation (RAG) architectures. Deep expertise in API integration, preferably within the Azure ecosystem. Experience with Infrastructure as Code (IaC) across development, testing, and production environments. Solid understanding of Azure networking principles, security …
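A minimal sketch of the kind of Azure OpenAI call this role describes, assuming the openai Python package (v1+); the endpoint, deployment name, API version, and system prompt below are placeholders, not values from the listing.

# Minimal sketch: calling an Azure OpenAI chat deployment with a system prompt.
# Endpoint, key, deployment name, and API version are assumed placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",                            # assumed API version
)

SYSTEM_PROMPT = (
    "You are an assistant for an insurance contact centre. "
    "Answer concisely and flag anything you are unsure about."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # name of the Azure *deployment*, not the base model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

print(ask("What documents do I need to make a windscreen claim?"))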
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
First Central Services
AI expert ready to take your skills to the next level? Do words like Azure OpenAI, Cognitive Services, prompt engineering, Retrieval-Augmented Generation (RAG) architectures, vector stores, and API integrations make you light up inside? If so, we want to hear from you! At 1st Central, we're on an exciting journey with AI … develop AI and Generative AI solutions using services like Azure OpenAI and Azure Cognitive Services. Implement prompt engineering techniques and Retrieval-Augmented Generation (RAG) architectures. Ensure scalability, security, auditability, and efficiency of AI solutions through detailed system design and development practices. Deploy and manage AI solutions via CI/CD pipelines in Azure DevOps … deploying, and managing production-grade AI and Generative AI systems. Extensive experience with Cloud-based AI and Cognitive Services, and Retrieval-Augmented Generation (RAG) architectures. Deep expertise in API integration, preferably within the Azure ecosystem. Experience with Infrastructure as Code (IaC) across development, testing, and production environments. Solid understanding of Azure networking principles, security …
Sheffield, Yorkshire, United Kingdom Hybrid / WFH Options
Educations Media Group
competencies, success in this role hinges on specific, hands-on experience in the following areas: Generative AI Applications: Demonstrable experience designing, building, and deploying applications leveraging Generative AI techniques. RAG Pipelines: Deep understanding and practical experience in developing and optimising Retrieval-Augmented Generation (RAG) pipelines. GenAI Frameworks: Hands-on experience with key frameworks such … APIs (OpenAI, Gemini) and ideally experience handling open-source models. Knowledge Graphs for AI: Experience utilising Knowledge Graphs (Neo4j preferred) within AI architectures, particularly as part of modern RAG systems. Highly Desirable: Experience building B2C applications (e.g., chatbots) and exploring Agentic AI patterns. What can we offer you? Freedom to help, plan and lead AI/ML architecture decisions in …
libraries such as TensorFlow or PyTorch. Experience working with LLMs (Gemini), prompt engineering, and reinforcement learning from human feedback (RLHF). Experience with LangChain for building LLM applications with RAG pipelines and agent workflows. Practical understanding of vector search, embeddings, and retrieval-augmented generation (RAG). Experience building and deploying machine learning models into …
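As a rough illustration of the vector search and embedding concepts listed above, a minimal retrieval step (the "R" in RAG) using the OpenAI embeddings endpoint and numpy; the model name and toy documents are placeholders, and a real pipeline would typically sit behind LangChain and a managed vector store rather than an in-memory array.

# Minimal sketch of embedding-based retrieval by cosine similarity.
# Assumes OPENAI_API_KEY is set; the embedding model name is an assumed placeholder.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Refunds are processed within 14 days of a returned item being received.",
    "Premium members get free next-day delivery on all orders.",
    "Gift cards cannot be exchanged for cash.",
]

def embed(texts: list[str]) -> np.ndarray:
    # One embedding vector per input string.
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

doc_vectors = embed(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank documents by cosine similarity to the query embedding.
    q = embed([query])[0]
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

print(retrieve("How long do refunds take?"))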
architecture of LLMs. Foundational knowledge of diffusion models for image generation. Can display and present completed project(s) using LLMs with a focus on any of the following: RAG, Agentic-RAG, fine-tuning. Some experience or familiarity with deploying applications in the Cloud using services such as AWS or Azure. Proven track record in securing web/API applications.
South East London, England, United Kingdom Hybrid / WFH Options
Anson McCade
varied use cases. Build agentic workflows and reasoning pipelines using frameworks such as LangChain, LangGraph, CrewAI, Autogen, and LangFlow. Implement retrieval-augmented generation (RAG) pipelines using vector databases like Pinecone, FAISS, Chroma, or PostgreSQL. Fine-tune prompts to optimise performance, reliability, and alignment. Design and implement memory modules for short-term and long-term … cloud AI tools, observability platforms, and performance optimisation. This is an opportunity to work at the forefront of AI innovation, where your work will directly shape how next-generation systems interact, reason, and assist.
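A compact sketch of two of the pieces named above: a FAISS index for retrieval and a simple short-term memory buffer. The embedding function is stubbed with pseudo-random vectors so the example runs offline; in practice frameworks such as LangChain or LangGraph provide these building blocks on top of a real embedding model. Requires the faiss-cpu package.

# Illustrative sketch: FAISS vector index plus a short-term memory buffer feeding a prompt.
import numpy as np
import faiss
from collections import deque

DIM = 384  # embedding dimensionality (assumption)

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: pseudo-random vector, stable within a single run.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(DIM, dtype=np.float32)

documents = ["Refund policy: 14 days.", "Support hours: 9am-5pm GMT.", "Premium tier includes SLA."]
index = faiss.IndexFlatL2(DIM)                         # exact L2 search
index.add(np.stack([embed(d) for d in documents]))

short_term_memory = deque(maxlen=5)                    # last few conversational turns

def retrieve(query: str, k: int = 2) -> list[str]:
    _, ids = index.search(embed(query).reshape(1, -1), k)
    return [documents[i] for i in ids[0]]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    history = "\n".join(short_term_memory)
    short_term_memory.append(f"user: {query}")
    return f"History:\n{history}\n\nContext:\n{context}\n\nUser question: {query}"

print(build_prompt("What are your support hours?"))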
of AI engineers. Represent AI in senior product, engineering, and vendor forums. Generative AI Delivery: Lead design, prototyping, and deployment of GenAI use cases (e.g. co-pilots, AI agents, RAG systems). Establish scalable LLMOps practices including model evaluation, governance, and lifecycle automation. Maintain awareness of emerging models and integration strategies. Machine Learning Engineering: Support the ML Engineer in model … Required Qualifications: Proven experience building and deploying GenAI applications in production. Strong hands-on knowledge of LLMs, prompt engineering, and retrieval-augmented generation (RAG). Practical experience with traditional ML, including data pipelines and MLOps workflows. Working knowledge of statistical modelling and experimentation. Proficiency in Python and at least one additional general-purpose language.
functional and fast-paced environment. Experimentation Mindset: Passion for testing ideas, building rapidly, and learning from experiments in a structured way. Nice to Have: Experience designing RAG (Retrieval-Augmented Generation)/CAG systems using vector databases and hybrid search strategies. Knowledge of LLM vulnerabilities, including adversarial prompting and data poisoning. Experience in designing prompt …
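A toy sketch of the hybrid search idea mentioned above, blending a keyword-overlap score with a vector-similarity score; the character-frequency "embedding" and the 50/50 weighting are stand-ins for a real embedding model, BM25-style scoring, and a vector database.

# Illustrative hybrid search: combine sparse (keyword) and dense (vector) relevance scores.
import numpy as np

documents = [
    "Reset your password from the account settings page.",
    "Invoices are emailed on the first day of each month.",
    "Two-factor authentication can be enabled under security settings.",
]

def keyword_score(query: str, doc: str) -> float:
    q_tokens, d_tokens = set(query.lower().split()), set(doc.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)

def toy_embedding(text: str) -> np.ndarray:
    # Stand-in for a real embedding model: normalised character-frequency vector.
    vec = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1
    return vec / (np.linalg.norm(vec) or 1.0)

def hybrid_search(query: str, alpha: float = 0.5) -> list[tuple[float, str]]:
    q_vec = toy_embedding(query)
    results = []
    for doc in documents:
        dense = float(toy_embedding(doc) @ q_vec)   # vector similarity
        sparse = keyword_score(query, doc)          # keyword overlap
        results.append((alpha * dense + (1 - alpha) * sparse, doc))
    return sorted(results, reverse=True)

for score, doc in hybrid_search("how do I reset my password"):
    print(f"{score:.2f}  {doc}")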
TensorFlow, Transformers, LangChain) Mastery of prompt engineering and fine-tuning Large Language Models. Proficient in vector databases (Pinecone, Weaviate, Milvus) and embedding technologies. Expert in building RAG (Retrieval-Augmented Generation) systems at scale. Strong experience with MLOps practices and model deployment pipelines. Proficient in cloud AI services (AWS SageMaker/Bedrock). Deep understanding of …
making. Ensuring AI models are scalable and efficient for real-world enterprise deployment. Experimenting with different machine learning and GenAI techniques, including prompt engineering, RAG (Retrieval-Augmented Generation), fine-tuning of LLMs, RLHF (reinforcement learning from human feedback), and adversarial techniques. Evaluating AI model performance using statistical and business-driven metrics. Working on natural …
and US investors. Our founders have delivered cutting-edge AI at world-class research labs and high-growth technology companies. Now, operating in stealth, we apply next-generation agentic AI to overhaul mission-critical enterprise workflows that still depend on error-prone, manual processes. Our vision is to bring these high-value operations into the modern era … event buses (Kafka, Pulsar). Wrangle large, heterogeneous data sets—model, transform, and index multi-modal, multi-terabyte enterprise datasets for advanced AI workloads. Develop enterprise-level next-generation AI systems with the support of our AI specialists. Ship complete customer features - from architecture and code to CI/CD, infra-as-code (Terraform), rollout, and user training. … contract. Thrive in an early-stage, high-ownership environment—prototype today, deploy tomorrow, iterate next week. Bonus Points: Experience deploying or consuming LLM-powered services (OpenAI, open-source models, RAG, vector stores) can be a bonus. However, we consider many great candidates without previous AI experience. What we're offering: Base salary from £115,000 - £135,000 … plus meaningful …
Nottingham, Nottinghamshire, United Kingdom Hybrid / WFH Options
Experian Group
Fluent in English (written and spoken). (For senior candidates) Ability to manage projects and lead teams. Good to Have: Knowledge or hands-on experience with Generative AI, LLMs, RAG, prompt engineering, and information retrieval. Familiarity with credit-related topics and regulatory frameworks like Basel and IFRS 9. Why Join Us? Work on impactful projects shaping the future of financial …
integrated into enterprise applications to enhance user experience, decision-making, and automation. Exposure to modern AI application patterns such as: Retrieval-Augmented Generation (RAG) for augmenting LLMs with domain-specific knowledge. Prompt engineering and fine-tuning for tailoring model behavior to business-specific contexts. Use of embedding stores and vector databases (e.g., Pinecone, Redis …
South East London, England, United Kingdom Hybrid / WFH Options
Futuria
infrastructure. Working knowledge of Kubernetes, security best practices, and cloud platforms (AWS, GCP, or Azure). Desirable: Experience with prompt engineering, Retrieval-Augmented Generation (RAG), and graph databases. Familiarity with multi-agent LLM systems and agentic platforms (e.g., AutoGen, CrewAI), and experience deploying LLM-based applications. Experience with tools such as LangChain, LangSmith, or Chainlit …
for someone who wants to be part of a founding team. The Role: Develop and deploy LLM-based solutions tailored to specific business needs (e.g., chatbots, summarization, content generation, semantic search). Fine-tune and customize pre-trained LLMs for targeted applications. Conduct prompt engineering and few-shot learning to optimise model performance. Build pipelines for scalable model training … hands-on experience with LLMs. Proficiency with LLM frameworks such as Hugging Face Transformers, OpenAI API, LangChain, or similar. Experience fine-tuning large transformer models or implementing retrieval-augmented generation systems. Strong Python programming skills and familiarity with ML libraries (e.g., PyTorch, TensorFlow). Knowledge of prompt engineering best practices and prompt optimization. Understanding of …
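A small sketch of the few-shot prompting pattern referred to above. The task, worked examples, and formatting are illustrative placeholders; the assembled prompt would be sent to whichever LLM the product uses.

# Illustrative few-shot prompt builder for a support-ticket summarisation task.
FEW_SHOT_EXAMPLES = [
    {
        "ticket": "Customer cannot log in after resetting password; error code 403.",
        "summary": "Login failure (403) following password reset.",
    },
    {
        "ticket": "Invoice for March shows duplicate charge of £49.99 on card ending 1234.",
        "summary": "Duplicate £49.99 charge on March invoice.",
    },
]

def build_few_shot_prompt(ticket_text: str) -> str:
    # Task instruction, then worked examples, then the new input to complete.
    lines = ["Summarise each support ticket in one short sentence.", ""]
    for example in FEW_SHOT_EXAMPLES:
        lines.append(f"Ticket: {example['ticket']}")
        lines.append(f"Summary: {example['summary']}")
        lines.append("")
    lines.append(f"Ticket: {ticket_text}")
    lines.append("Summary:")
    return "\n".join(lines)

print(build_few_shot_prompt("App crashes when uploading photos larger than 10 MB."))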
healthcare and cutting-edge LLM technology, shipping fast and solving meaningful problems every day. What You’ll Own: Architect and develop backend microservices (Python/FastAPI) that power our RAG pipelines and analytics. Build scalable infrastructure for retrieval and vector search (PGVector, Pinecone, Weaviate). Design evaluation frameworks to improve search accuracy and reduce hallucinations. Deploy and manage services … LlamaIndex. What We’re Looking For: 5+ years building production-grade backend systems (preferably in Python). Strong background in search, recommender systems, or ML infrastructure at scale. Experience with RAG architectures, embeddings, and vector search. Confidence working across GCP (or AWS/Azure) and infrastructure-as-code. Familiarity with observability, performance tuning, and secure data practices. A growth mindset, startup …
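A minimal sketch of the shape of such a service: a FastAPI endpoint that ranks documents by cosine similarity over an in-memory matrix. The toy corpus, random embeddings, and endpoint path are assumptions standing in for PGVector/Pinecone/Weaviate and a real embedding model.

# Illustrative FastAPI microservice fronting a retrieval step.
from fastapi import FastAPI
from pydantic import BaseModel
import numpy as np

app = FastAPI()

# Toy corpus and embeddings; a real service would query a vector store instead.
DOCS = ["Dosage guidance for drug A.", "Contraindications for drug B.", "Storage requirements for vaccines."]
rng = np.random.default_rng(0)
DOC_VECS = rng.standard_normal((len(DOCS), 64)).astype(np.float32)

class Query(BaseModel):
    text: str
    top_k: int = 3

def embed(text: str) -> np.ndarray:
    # Placeholder embedding; swap in a real model in production.
    return np.random.default_rng(abs(hash(text)) % (2**32)).standard_normal(64).astype(np.float32)

@app.post("/search")
def search(query: Query):
    q = embed(query.text)
    scores = DOC_VECS @ q / (np.linalg.norm(DOC_VECS, axis=1) * np.linalg.norm(q))
    top = np.argsort(scores)[::-1][: query.top_k]
    return {"results": [{"doc": DOCS[i], "score": float(scores[i])} for i in top]}

# Run locally with: uvicorn app:app --reload  (assuming this file is saved as app.py)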
working with Generative AI on a wide range of challenges in a fast-moving environment. Minimum Qualifications & Experience: Strong understanding of the theory of generative AI systems, e.g., LLMs, RAG, Graph networks. Strong experience deploying LLMs for search pipelines. Up to date with current LLM and NLP research. Experience designing, developing and deploying production machine learning pipelines. Strong background …
for tangible business outcomes. Deep, hands-on understanding of machine learning, agentic systems, and generative AI. Practical knowledge of the AI landscape: architectural trade-offs (e.g., fine-tuned vs. RAG), mitigating hallucination, and technology selection for specific use cases. Proven ability to define technical vision and strategy for new technology initiatives. Shape plans, create reusable architectural patterns and frameworks for …
machine learning, data science or a related STEM field (degree or equivalent experience). Hands-on experience developing and deploying production-grade ML models, including advanced RAG (retrieval-augmented generation) systems. Deep expertise in Generative AI, LLMs, NLP, and Knowledge Graphs, with a track record of translating complex models into real-world business solutions.
and leveraging both structured and unstructured data sources. Experimenting with the integration and fine-tuning of models with vector databases and embeddings to support semantic search, RAG (retrieval-augmented generation), and domain-specific applications. Working within a Data Mesh architecture, collaborating across domains to ensure scalable, interoperable data products; containerising solutions with Docker and …