Stay up to date with new technologies and best practices in data engineering, advancements in generative AI, transformer architectures, and retrieval-augmented generation (RAG) techniques. Ensure data security standards are met, in conjunction with the Information Security team. Manage AI/ML projects and mentor junior team members. Experience: Extensive experience in data …
source control and the Azure cloud. In this role you'll have the opportunity to work on a mixture of the following: Generative AI: Design and develop RAG-based applications. LLM fine-tuning, including preparation of training sets from internal data. Agent-based applications. Evaluating use-case-specific LLMs. AI/ML NLP: Named Entity Recognition across a … looking for professionals with these required skills to achieve our goals: Bachelor's degree in computer science. Extensive experience working in AI/ML. Generative AI: Demonstrable experience of RAG, including chunking strategies, vectorising and indexing data, retrieval strategies and reranking, prompting strategies, and function calling (see the sketch below). Our current tech stack is OpenAI, LangChain, Azure AI, Python, pg_vector, Sinequa. …
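For a sense of what a requirement like this looks like in code, here is a minimal RAG sketch covering chunking, embedding, similarity retrieval, and a grounded prompt. It assumes the OpenAI Python SDK (v1.x); the model names, corpus file, and chunk sizes are placeholders rather than anything specified by the listing, and reranking and function calling are left out.

```python
# Minimal RAG sketch: chunk -> embed -> retrieve -> prompt (OpenAI Python SDK v1.x assumed).
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Naive fixed-size chunking with overlap; real systems often chunk by sentences or headings."""
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

documents = chunk(open("internal_docs.txt").read())   # hypothetical corpus file
doc_vectors = embed(documents)

def answer(question: str, k: int = 3) -> str:
    q_vec = embed([question])[0]
    # Cosine-similarity retrieval; a vector store (pg_vector, FAISS, ...) replaces this at scale.
    scores = doc_vectors @ q_vec / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec))
    context = "\n\n".join(documents[i] for i in np.argsort(scores)[-k:])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```

In the stack this listing names, pg_vector would hold the embeddings, and LangChain or function calling would orchestrate retrieval and tool use.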
role, you'll work closely with ML scientists, data engineers, and product teams to help bring innovative solutions, such as retrieval-augmented generation (RAG) systems, multi-agent architectures, and AI agent workflows, into production. As a Senior Machine Learning Engineer, you'll play a key role in developing and integrating cutting-edge AI solutions … a highly collaborative and fast-moving environment where your contributions will directly shape both the future of our platform and your own growth. Key Responsibilities: Design, build, and deploy RAG systems, including multi-agent and AI agent-based architectures for production use cases. Contribute to model development processes including fine-tuning, parameter-efficient training (e.g., LoRA, PEFT), and distillation. … related technical discipline. Strong foundation in machine learning and data science fundamentals, including supervised/unsupervised learning, evaluation metrics, data preprocessing, and feature engineering. Proven experience building and deploying RAG systems and/or LLM-powered applications in production environments. Proficiency in Python and ML libraries such as PyTorch, Hugging Face Transformers, or TensorFlow. Experience with vector search tools (e.g. …
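The parameter-efficient training this listing mentions (LoRA, PEFT) can be illustrated with a short sketch using Hugging Face Transformers and the PEFT library. The base model, data file, and hyperparameters below are placeholders chosen for illustration, not details from the role.

```python
# Hedged sketch of parameter-efficient fine-tuning (LoRA) with Hugging Face Transformers + PEFT.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "facebook/opt-350m"                                  # small stand-in for a real foundation model
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters; only these adapter weights are updated during training.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM"))
model.print_trainable_parameters()                          # sanity-check how few parameters train

dataset = load_dataset("json", data_files="train.jsonl")["train"]   # hypothetical {"text": "..."} records
dataset = dataset.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=2, num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # pads batches, sets LM labels
)
trainer.train()
model.save_pretrained("lora-out")                           # writes only the small adapter weights
```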
robust agentic workflows enabling AI agents to interact autonomously with data sources and external APIs using advanced prompt engineering and retrieval-augmented generation (RAG). Fine-tune and optimize pre-trained large language models and multi-modal models for targeted use cases, ensuring high performance and low latency in production. Implement distributed training and scalable …
techniques including test-driven development (TDD), behaviour-driven development (BDD), integration testing and performance testing. Some experience with AI tools (one or more of): Python, LLMs (Large Language Models), RAG, LangChain. Set yourself apart with: Bachelor's/Master's degree in Computer Science or related field. Experience of working on large-scale, complex, and distributed applications in an Agile environment. …
Haywards Heath, Sussex and Manchester, Lancashire, United Kingdom Hybrid / WFH Options
First Central Services
AI expert ready to take your skills to the next level? Do words like Azure OpenAI, Cognitive Services, prompt engineering, Retrieval-Augmented Generation (RAG) architectures, vector stores, and API integrations make you light up inside? If so, we want to hear from you! At 1st Central, we're on an exciting journey with AI … develop AI and Generative AI solutions using services like Azure OpenAI and Azure Cognitive Services. Implement prompt engineering techniques and Retrieval-Augmented Generation (RAG) architectures. Ensure scalability, security, auditability, and efficiency of AI solutions through detailed system design and development practices. Deploy and manage AI solutions via CI/CD pipelines in Azure DevOps … deploying, and managing production-grade AI and Generative AI systems. Extensive experience with Cloud-based AI and Cognitive Services, and Retrieval-Augmented Generation (RAG) architectures. Deep expertise in API integration, preferably within the Azure ecosystem. Experience with Infrastructure as Code (IaC) across development, testing, and production environments. Solid understanding of Azure networking principles, security …
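As a rough illustration of the Azure OpenAI plus prompt-engineering combination this listing describes, here is a minimal sketch assuming the openai v1.x SDK's AzureOpenAI client. The endpoint, deployment name, and API version are placeholders; retrieval of the context chunks is assumed to happen elsewhere.

```python
# Hedged sketch: a grounded chat completion against an Azure OpenAI deployment.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],   # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",                             # placeholder API version
)

def grounded_answer(question: str, retrieved_chunks: list[str]) -> str:
    """Prompt-engineering pattern: pass retrieved context and constrain the model to it."""
    context = "\n---\n".join(retrieved_chunks)
    resp = client.chat.completions.create(
        model="gpt-4o-deployment",                        # your Azure deployment name, not a model ID
        messages=[
            {"role": "system",
             "content": "You are a support assistant. Answer only from the supplied context; "
                        "say 'I don't know' if the context is insufficient."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0.2,
    )
    return resp.choices[0].message.content
```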
APIs, or other LLM orchestration tools. A solid understanding of tokenization, embedding models, vector databases (e.g., Pinecone, Weaviate, FAISS), and retrieval-augmented generation (RAG) pipelines. Experience designing and evaluating LLM-powered systems such as chatbots, summarization tools, content generation workflows, or intelligent data extraction pipelines. Deep understanding of NLP fundamentals: text preprocessing …
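The embedding and vector-database knowledge asked for here boils down to the retrieval half of a RAG pipeline. A minimal sketch using sentence-transformers and FAISS follows; the model name is a commonly used public checkpoint and the documents are invented, neither comes from the listing.

```python
# Sketch: embeddings + FAISS similarity search, the retrieval half of a RAG pipeline.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = ["Refunds are processed within 5 working days.",
        "Premiums are reviewed annually at renewal.",
        "Claims can be submitted through the mobile app."]

vectors = model.encode(docs, normalize_embeddings=True)       # unit vectors -> inner product == cosine
index = faiss.IndexFlatIP(vectors.shape[1])                   # exact inner-product index
index.add(np.asarray(vectors, dtype="float32"))

query = model.encode(["how long do refunds take"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), k=2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")
```

Hosted stores such as Pinecone or Weaviate replace the in-memory index here, but the embed-then-search pattern is the same.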
to the highest standards. You'll be part of a multidisciplinary team focused on delivering enterprise-grade AI capabilities, including generative AI, agentic AI, LLMs and RAG (retrieval-augmented generation). The AI Engineer will optimise prompts to generative AI models across NiCE's Proactive AI Agent applications, working with several groups in the …
Sheffield, Yorkshire, United Kingdom Hybrid / WFH Options
Educations Media Group
competencies, success in this role hinges on specific, hands-on experience in the following areas: Generative AI Applications: Demonstrable experience designing, building, and deploying applications leveraging Generative AI techniques. RAG Pipelines: Deep understanding and practical experience in developing and optimising Retrieval-Augmented Generation (RAG) pipelines. GenAI Frameworks: Hands-on experience with key frameworks such … APIs (OpenAI, Gemini) and ideally experience handling open-source models. Knowledge Graphs for AI: Experience utilising Knowledge Graphs (Neo4j preferred) as part of AI architectures, particularly in modern RAG systems. Highly Desirable: Experience building B2C applications (e.g., chatbots), exploring Agentic AI patterns. What can we offer you? Freedom to help, plan and lead AI/ML architecture decisions in …
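The "Knowledge Graphs as part of modern RAG systems" requirement typically means pulling graph context into the prompt alongside retrieved text. A hedged sketch with the Neo4j Python driver follows; the connection details, node labels, and Cypher query are invented for illustration.

```python
# Sketch: pulling one-hop graph context from Neo4j to ground an LLM prompt (graph-augmented RAG).
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))  # placeholder credentials

def related_facts(entity_name: str, limit: int = 10) -> list[str]:
    """Return one-hop neighbourhood facts for an entity as plain-text triples."""
    query = (
        "MATCH (e {name: $name})-[r]-(n) "
        "RETURN e.name AS subject, type(r) AS predicate, n.name AS object "
        "LIMIT $limit"
    )
    with driver.session() as session:
        rows = session.run(query, name=entity_name, limit=limit)
        return [f"{r['subject']} {r['predicate']} {r['object']}" for r in rows]

# These facts are then appended to the retrieved document chunks before prompting the LLM.
context = "\n".join(related_facts("Course: Data Engineering"))
```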
libraries such as TensorFlow or PyTorch. Experience working with LLMs (Gemini), prompt engineering, and reinforcement learning from human feedback (RLHF). Experience with LangChain for building LLM applications with RAG pipelines and agent workflows. Practical understanding of vector search, embeddings, and retrieval-augmented generation (RAG). Experience building and deploying machine learning models into …
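For the LangChain-based RAG pipelines mentioned here, a small retriever-backed question-answering helper might look like the sketch below. The split-package layout (langchain-openai, langchain-community) and retriever API follow one recent release line and may differ in other versions; the documents and model name are placeholders.

```python
# Hedged LangChain sketch: embeddings, an in-memory FAISS store, and a retriever-grounded answer.
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

docs = ["Policy A covers accidental water damage.",
        "Policy B covers theft and accidental loss."]

store = FAISS.from_texts(docs, OpenAIEmbeddings())           # builds an in-memory vector index
retriever = store.as_retriever(search_kwargs={"k": 2})
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def ask(question: str) -> str:
    context = "\n".join(d.page_content for d in retriever.invoke(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm.invoke(prompt).content

print(ask("Which policy covers water damage?"))
```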
work closely with product and domain experts to identify compelling solutions at the intersection of user needs and technical feasibility. Our team is responsible for designing the next generation of risk and fraud investigation software. We own AI innovation for Thomson Reuters' core Risk and Fraud products, including CLEAR, CLEAR Adverse Media, and CLEAR Risk Inform. About … NLP/ML/Knowledge Graph/GenAI systems for commercial applications. Practical experience with traditional and state-of-the-art NLP methods, Knowledge Graph algorithms, and GenAI (including RAG and agentic frameworks). Experience writing production code and ensuring well-managed software delivery. Demonstrable experience translating complex problems into successful AI applications. Outstanding communication, problem-solving, and analysis skills. Collaborating …
IT, Big Data, Security & Privacy, Digital, and Business Unit teams to ensure integration, compliance, and delivery readiness. Technology Leadership: Evaluate and leverage modern AI/GenAI capabilities including LLMs, RAG pipelines, MLOps, and cloud-native tooling. Governance and Compliance: Ensure solutions adhere to enterprise governance, responsible AI principles, data protection regulations, and security best practices. Value Realization: Work with business … complex environments. Hands-on experience delivering AI/GenAI solutions in networks, telecommunications, or customer experience domains is strongly preferred. Deep understanding of architecture patterns for LLMs, NLP, MLOps, RAG, APIs, and real-time data integration. Strong background in working with cloud platforms (GCP, AWS, Azure) and big data technologies (e.g., Kafka, Spark, Snowflake, Databricks). Demonstrated ability to work across …
Python Proficiency: Strong skills with ML frameworks (TensorFlow, PyTorch) and LLM tools (Hugging Face, LangChain, OpenAI APIs). Vector Understanding: Solid knowledge of embeddings, vector databases (Pinecone, Weaviate, FAISS), and RAG pipelines. NLP Fundamentals: Text preprocessing, language modelling, and semantic similarity. Cloud Experience: AWS ecosystem knowledge (SageMaker, Lambda, etc.). Production Ready: ETL pipelines, version control, Agile methodologies. It would be a …
Have an advanced degree in Computer Science, Mathematics or a similar quantitative discipline. Understanding of NLP algorithms and techniques and/or experience with Large Language Models (fine-tuning, RAG, agents). Are proficient with Python, including open-source data libraries (e.g., Pandas, NumPy, scikit-learn, etc.). Have experience productionising machine learning models. Are an expert in one of predictive modeling …
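The Pandas/scikit-learn plus productionising combination above is the classic predictive-modelling workflow. Here is a small sketch of that pattern; the CSV file, column names, and model choice are invented for illustration.

```python
# Sketch: a scikit-learn pipeline trained on tabular data and persisted for serving.
import joblib
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("claims.csv")                               # hypothetical training data
X, y = df.drop(columns=["is_fraud"]), df["is_fraud"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["claim_amount", "customer_age"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["channel", "region"]),
])

model = Pipeline([("prep", preprocess), ("clf", GradientBoostingClassifier())])
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Persisting the full pipeline keeps preprocessing and model versioned together for serving.
joblib.dump(model, "fraud_model_v1.joblib")
```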
Practical knowledge in data wrangling, handling, processing, integrating, and analyzing large heterogeneous data sets related to drug discovery. LLM Experience: Experience with LLMs (fine-tuning, pretraining, continued pretraining, inference, RAG, and building multi-agent workflows using LlamaIndex, LangChain, LangGraph, vector databases, etc.). Production-Grade Models: Significant expertise in building production-grade machine learning models in industry and/or …
of AI engineers. Represent AI in senior product, engineering, and vendor forums. Generative AI Delivery: Lead design, prototyping, and deployment of GenAI use cases (e.g., co-pilots, AI agents, RAG systems). Establish scalable LLMOps practices including model evaluation, governance, and lifecycle automation. Maintain awareness of emerging models and integration strategies. Machine Learning Engineering: Support the ML Engineer in model … Required Qualifications: Proven experience building and deploying GenAI applications in production. Strong hands-on knowledge of LLMs, prompt engineering, and retrieval-augmented generation (RAG). Practical experience with traditional ML, including data pipelines and MLOps workflows. Working knowledge of statistical modelling and experimentation. Proficiency in Python and at least one additional general-purpose language. …
rapid iteration, prompt engineering, and practical application. You'll fine-tune and optimize foundation models, craft sophisticated multi-agent systems, and invent novel solutions to power the next generation of voice intelligence. What You'll Do: Integrate AI solutions into existing products and workflows. Collaborate with cross-functional teams to understand business requirements and translate them into technical … AWS, Google Cloud, or Azure. Knowledge of Kubernetes and containerization technologies. Experience with data science and ML engineering. Familiarity with retrieval-augmented generation (RAG). The requirements listed in the job descriptions are guidelines. You don't have to satisfy every requirement or meet every qualification listed. If your skills are transferable we would still …
TensorFlow, Transformers, LangChain). Mastery of prompt engineering and fine-tuning Large Language Models. Proficient in vector databases (Pinecone, Weaviate, Milvus) and embedding technologies. Expert in building RAG (Retrieval-Augmented Generation) systems at scale. Strong experience with MLOps practices and model deployment pipelines. Proficient in cloud AI services (AWS SageMaker/Bedrock). Deep understanding of …
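"Mastery of prompt engineering" usually means structured, reusable prompts rather than ad-hoc strings. A framework-agnostic sketch of a few-shot, schema-constrained message list follows; the domain, field names, and examples are invented, and the resulting list can be passed to any chat-completion API.

```python
# Prompt-engineering sketch: a few-shot, JSON-constrained message list.
import json

SYSTEM = (
    "You extract structured fields from insurance emails. "
    "Respond with JSON only, using the keys: intent, policy_number, urgency."
)

FEW_SHOT = [
    {"role": "user", "content": "Hi, I'd like to cancel policy PC-1042 before renewal."},
    {"role": "assistant", "content": json.dumps(
        {"intent": "cancellation", "policy_number": "PC-1042", "urgency": "normal"})},
]

def build_messages(email_body: str) -> list[dict]:
    """Assemble system prompt + worked examples + the new input, in order."""
    return [{"role": "system", "content": SYSTEM}, *FEW_SHOT,
            {"role": "user", "content": email_body}]

messages = build_messages("URGENT: my car was stolen, policy PC-2210, what do I do?")
```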
making. Ensuring AI models are scalable and efficient for real-world enterprise deployment. Experimenting with different machine learning and GenAI techniques, including prompt engineering, RAG (Retrieval-Augmented Generation), fine-tuning of LLMs, RLHF (reinforcement learning with human feedback), and adversarial techniques. Evaluating AI model performance using statistical and business-driven metrics. Working on natural …
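"Evaluating AI model performance using statistical and business-driven metrics" can be illustrated with a tiny offline harness like the one below. The labels, scores, and per-error costs are invented; only the pattern of pairing standard metrics with a cost-based view is the point.

```python
# Sketch: evaluating a classifier with both statistical and business-driven metrics.
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1])                       # ground-truth labels (toy data)
y_prob = np.array([0.1, 0.8, 0.4, 0.3, 0.9, 0.2, 0.6, 0.7])       # model scores (toy data)
y_pred = (y_prob >= 0.5).astype(int)

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
print("roc auc:  ", roc_auc_score(y_true, y_prob))

# A simple business-driven view: expected cost of errors at this decision threshold.
COST_FALSE_NEGATIVE, COST_FALSE_POSITIVE = 500.0, 50.0             # hypothetical per-case costs
fn = int(((y_pred == 0) & (y_true == 1)).sum())
fp = int(((y_pred == 1) & (y_true == 0)).sum())
print("expected error cost:", fn * COST_FALSE_NEGATIVE + fp * COST_FALSE_POSITIVE)
```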
teammates, and fueling your curiosity for the latest trends in LLMs, MLOps, and ML more broadly. The impact you will have: Develop LLM solutions on customer data, such as RAG architectures on enterprise knowledge repos, querying structured data with natural language, and content generation. Build, scale, and optimize customer data science workloads and apply best-in-class MLOps … in Databricks. Collaborate cross-functionally with the product and engineering teams to define priorities and influence the product roadmap. What we look for: Experience building Generative AI applications, including RAG, agents, text2sql, fine-tuning, and deploying LLMs, with tools such as Hugging Face, LangChain, and OpenAI. Extensive hands-on industry data science experience, leveraging typical machine learning and data science tools …
communication of complex ideas. Ability to work independently and collaboratively. Preferred Skills: Experience building scalable applications with LLMs using frameworks like LangChain, LlamaIndex, Hugging Face, etc. Deep knowledge of RAG implementation and enhancements. Benefits & perks (UK full-time employees): Generous PTO and holidays. Comprehensive medical and dental insurance. Paid parental leave (12 weeks). Fertility and family planning support. Early …
cost-effective machine learning (ML) model inference to Search workflows. ML inference has become a crucial part of the modern search experience, whether used for query understanding, semantic search, RAG, or any other GenAI use case. Our goal is to simplify ML inference in Search workflows by focusing on large-scale inference capabilities for embeddings and reranking models that are …
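Reranking, one of the inference workloads named here, is commonly served with a cross-encoder that scores each query-passage pair jointly. A minimal sketch follows, assuming the sentence-transformers package; the model name is a widely used public checkpoint and the query and passages are invented.

```python
# Sketch: reranking retrieved passages with a cross-encoder.
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "how to reset my account password"
candidates = [
    "Passwords can be reset from the sign-in page via 'Forgot password'.",
    "Our offices are closed on public holidays.",
    "Account lockouts clear automatically after 30 minutes.",
]

# Unlike bi-encoder embeddings, the cross-encoder reads query and passage together for each pair.
scores = reranker.predict([(query, passage) for passage in candidates])
ranked = sorted(zip(scores, candidates), key=lambda pair: pair[0], reverse=True)
for score, passage in ranked:
    print(f"{score:.2f}  {passage}")
```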