grade applications Generative AI: Demonstrable experience of RAG, including chunking strategies, vectorising and indexing data, retrieval strategies and reranking, prompting strategies, and function calling. Our current tech stack is OpenAI, LangChain, Azure AI, Python, pg_vector, Sinequa. AI/ML: Hands-on experience with training and evaluating BERT-like models in real-world applications, especially in NLP or classification problems Data More ❯
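For context, a minimal sketch of the retrieval step in a pipeline like the one described above, using OpenAI embeddings with pg_vector; the table, column, and model names are illustrative assumptions rather than details from the listing:

```python
# Minimal RAG retrieval sketch: embed a query with OpenAI, then run a
# nearest-neighbour lookup in Postgres/pgvector. Table, column and model
# names are assumptions, not taken from the listing.
import psycopg2
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(text: str) -> list[float]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def retrieve(query: str, k: int = 5) -> list[str]:
    query_vec = embed(query)
    with psycopg2.connect("dbname=rag") as conn, conn.cursor() as cur:
        # "<=>" is pgvector's cosine-distance operator; smaller is closer.
        cur.execute(
            "SELECT content FROM documents ORDER BY embedding <=> %s::vector LIMIT %s",
            (str(query_vec), k),
        )
        return [row[0] for row in cur.fetchall()]

if __name__ == "__main__":
    for chunk in retrieve("How do I reset my password?"):
        print(chunk)
```

The retrieved chunks would then be passed to the chat model as context, with reranking applied between retrieval and generation if needed.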
Greater London, England, United Kingdom Hybrid / WFH Options
Intellect Group
and evaluation of domain-specific LLMs, applying retrieval-augmented generation (RAG) and prompt engineering techniques. Contribute to the development of multi-agent systems using frameworks such as AutoGen, LangGraph, LangChain, or CrewAI. Support the integration of AI safety techniques into system design and deployment. Help implement real-time and batch inference pipelines with secure APIs (REST/gRPC, event More ❯
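As an illustration of the graph-style orchestration such roles involve, a minimal LangGraph sketch with two placeholder nodes (retrieve, then generate); the state fields and node bodies are assumptions, not taken from the listing:

```python
# Two-node LangGraph sketch: a retrieve step feeding a generate step.
# Node bodies are placeholders standing in for real retrieval and LLM calls.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    question: str
    context: str
    answer: str

def retrieve(state: AgentState) -> dict:
    return {"context": f"docs relevant to: {state['question']}"}

def generate(state: AgentState) -> dict:
    return {"answer": f"Answer based on: {state['context']}"}

graph = StateGraph(AgentState)
graph.add_node("retrieve", retrieve)
graph.add_node("generate", generate)
graph.set_entry_point("retrieve")
graph.add_edge("retrieve", "generate")
graph.add_edge("generate", END)

app = graph.compile()
print(app.invoke({"question": "What is RAG?", "context": "", "answer": ""}))
```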
Qualifications Skills & Expertise Strong experience in machine learning, deep learning, and statistical analysis. Expertise in Python, with proficiency in ML and NLP libraries such as Scikit-learn, TensorFlow, Faiss, LangChain, Transformers and PyTorch. Experience with big data tools such as Hadoop, Spark, and Hive. Familiarity with CI/CD and MLOps frameworks for building end-to-end ML pipelines. Proven More ❯
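To make the BERT-style classification work concrete, a minimal inference sketch with Hugging Face Transformers and PyTorch; the checkpoint is a public example, not one named in any of these listings:

```python
# Sketch: scoring text with a BERT-like sequence classifier via Transformers
# and PyTorch. The checkpoint is a public example, not from the ad.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def classify(texts: list[str]) -> list[str]:
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    preds = logits.argmax(dim=-1)
    return [model.config.id2label[int(i)] for i in preds]

print(classify(["Great product, would recommend", "Terrible support experience"]))
```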
Bonus Points If I Have... Experience with AI engineering tools and technologies for fine-tuning and serving custom LLMs in production and with other Gen AI tools such as LangChain, LlamaIndex Experience working with Knowledge graphs based on text data Scala experience I'm Getting To Join a rapidly evolving, industry-leading SaaS company on an exciting journey of growth and More ❯
Go or Java). Demonstrated experience deploying and maintaining LLMs (e.g., GPT, Llama) in production environments. Familiarity with frameworks and tooling for LLMs and generative AI (e.g., Transformers, LangChain, Haystack, OpenAI, Vertex AI). Experience operationalizing ML solutions in cloud-native environments (AWS, GCP, Azure). Proficiency with containerization and orchestration (Docker, Kubernetes or similar) for scalable model deployment. More ❯
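A minimal sketch of what deploying an LLM behind an API can look like before Docker and Kubernetes enter the picture: a FastAPI endpoint wrapping a Transformers pipeline. The model choice and request schema are illustrative assumptions:

```python
# Sketch: a small generative model served over REST, the shape of service you
# would then containerise and deploy. Model and schema are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="gpt2")  # stand-in for a production LLM

class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 64

@app.post("/generate")
def generate(prompt: Prompt) -> dict:
    out = generator(prompt.text, max_new_tokens=prompt.max_new_tokens)
    return {"completion": out[0]["generated_text"]}

# Run locally with: uvicorn app:app --port 8000
```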
Statistical testing experience Experience with AWS Bedrock Experience with C# Containerization via Docker. Awareness of basic data science and generative AI methods. Exposure to generative AI application frameworks like LangChain, LlamaIndex, SmolAgents and griptape. What's in it for you? Join an ever-growing, market disrupting, global company where the teams - comprised of the best of the best - work in More ❯
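For the AWS Bedrock requirement, a minimal sketch using boto3's Converse API; the region and model ID are assumptions, and credentials are taken from the usual AWS environment:

```python
# Sketch: calling a hosted model through AWS Bedrock's Converse API.
# Region and model ID are illustrative assumptions.
import boto3

client = boto3.client("bedrock-runtime", region_name="eu-west-2")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Summarise RAG in one sentence."}]}],
    inferenceConfig={"maxTokens": 200},
)
print(response["output"]["message"]["content"][0]["text"])
```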
Haywards Heath, Sussex, United Kingdom Hybrid / WFH Options
First Central Services
services or insurance sectors advantageous. Familiarity with AzureML, Databricks, related Azure technologies, Docker, Kubernetes, and containerization is advantageous. Advanced proficiency in Python, and familiarity with AI frameworks such as LangChain Skilled in designing and operationalising AI Ops frameworks within cloud-based production environments. Exceptional communication skills, clearly articulating technical concepts to diverse stakeholders. Excellent organisational, time management, and prioritisation skills. More ❯
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
First Central Services
services or insurance sectors advantageous. Familiarity with AzureML, Databricks, related Azure technologies, Docker, Kubernetes, and containerization is advantageous. Advanced proficiency in Python, and familiarity with AI frameworks such as LangChain Skilled in designing and operationalising AI Ops frameworks within cloud-based production environments. Exceptional communication skills, clearly articulating technical concepts to diverse stakeholders. Excellent organisational, time management, and prioritisation skills. More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
control (Git) Experience working in cloud environments (AWS, GCP, or Azure) Ability to work independently and communicate effectively in a remote team Bonus Points Experience with Hugging Face Transformers, LangChain, or RAG pipelines Knowledge of MLOps tools (e.g. MLflow, Weights & Biases, Docker, Kubernetes) Exposure to data engineering or DevOps practices Contributions to open-source AI projects or research publications What More ❯
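As a concrete example of the MLOps tooling mentioned here (MLflow in this case), a minimal experiment-tracking sketch; the model, parameters, and run name are illustrative:

```python
# Sketch: logging an experiment with MLflow, the kind of tracking an MLOps
# setup provides. Model, parameters and run name are illustrative.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="baseline-rf"):
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
```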
pipelines for continuous model improvement Collaborate with cross-functional teams (research, product, and engineering) to embed AI capabilities into products and services Evaluate and select appropriate AI frameworks (e.g., LangChain, LlamaIndex) to integrate agent components seamlessly with enterprise systems Build full-stack applications (front-end interfaces and back-end APIs) using modern languages and frameworks (React/Angular, Python, Java More ❯
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Catalyst
SQL The ability to build AI-driven solutions using large language models (LLMs), and techniques such as retrieval-augmented generation (RAG) and agent-based approaches, supported by frameworks like LangChain and CrewAI Skills in leveraging AI model APIs (e.g. OpenAI, Anthropic) and rapid prototyping tools such as Streamlit, along with AI-native developer tools like Cursor, to bring ideas to More ❯
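A minimal sketch of the rapid-prototyping pattern described above: a Streamlit page calling an LLM API. The model name and prompt handling are assumptions:

```python
# Sketch: a quick Streamlit prototype backed by an LLM API.
# Model name is illustrative; OPENAI_API_KEY is read from the environment.
import streamlit as st
from openai import OpenAI

client = OpenAI()

st.title("Quick LLM prototype")
question = st.text_input("Ask a question")

if question:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    st.write(resp.choices[0].message.content)

# Run with: streamlit run app.py
```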
priorities and influence the product roadmap What we look for: Experience building Generative AI applications, including RAG, agents, text2sql, fine-tuning, and deploying LLMs, with tools such as HuggingFace, LangChain, and OpenAI Extensive hands-on industry data science experience, leveraging typical machine learning and data science tools including pandas, scikit-learn, and TensorFlow/PyTorch Experience building production-grade machine More ❯
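For the text2sql item, a minimal sketch: give the model the schema, ask for a query, and run it against SQLite. The schema, sample data, and model name are illustrative, and in a real system the generated SQL would be validated before execution:

```python
# Sketch of the text2sql pattern: schema in the prompt, SQL out, run read-only.
# Schema, data and model are illustrative assumptions.
import sqlite3
from openai import OpenAI

client = OpenAI()
SCHEMA = "CREATE TABLE orders (id INTEGER, customer TEXT, total REAL);"

def text_to_sql(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Schema:\n{SCHEMA}\nWrite one SQLite SELECT statement "
                       f"answering: {question}. Return only the SQL, no markdown.",
        }],
    )
    sql = resp.choices[0].message.content.strip()
    # Defensive cleanup in case the model wraps the query in a code fence.
    return sql.removeprefix("```sql").removeprefix("```").removesuffix("```").strip()

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "Acme", 120.0), (2, "Acme", 80.0), (3, "Globex", 40.0)])
sql = text_to_sql("What is the total order value per customer?")
print(sql)
print(conn.execute(sql).fetchall())
```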
software development Prior consulting experience essential Strong technical knowledge of GenAI and ML, including LLMs, RAG, MLOps, and prompt engineering Familiarity with platforms such as AWS Bedrock, Google Vertex, LangChain, or LlamaIndex Experience with both legacy systems and modern tech stacks Proven track record in agile delivery and digital transformation Excellent communication, analytical, and stakeholder management skills Willingness to travel More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
XPERT-CAREER LTD
of Docker, CI/CD workflows, and automation pipelines Familiarity with MLOps tooling such as MLFlow, Git version control, and environment management Desirable Skills & Interests: Experience with frameworks like LangChain, Langflow, or similar tools for building AI agents Understanding of Large Language Models (LLMs) and intelligent automation workflows Experience building high-availability, scalable systems using microservices or event-driven architecture More ❯
e.g. databases, software engineering practices, cloud computing - especially AWS) and data science (e.g. machine learning process) Excellent knowledge of Python including PyTorch, TensorFlow and SKLearn as well as initial knowledge of LangChain and RAGAS. Familiarity with CI/CD workflows is required and experience with containerisation and deployment using Docker/Kubernetes will be considered a plus 1+ year experience working in relevant More ❯
as SageMaker, Vertex AI or Azure Machine Learning Studio. Good knowledge of DevOps practices and tools (e.g.: Git, Docker, Kubernetes). Familiarity with AI platforms and frameworks such as LangChain, LlamaIndex and Hugging Face. Expertise in data manipulation, data visualisation, and statistical modelling libraries (e.g.: pandas, NumPy, Matplotlib, scikit-learn). Skills in data visualisation tools (e.g.: Tableau, PowerBI). Excellent More ❯
e.g., AWS, Azure, GCP) and demonstrated proficiency in deploying applications within cloud environments. Generative AI Ecosystem Knowledge : Deep understanding of the generative AI ecosystem, including AI orchestration frameworks (e.g., LangChain, Llama Index, Haystack) and cloud provider AI offerings (e.g., AWS Bedrock, Vertex AI, Azure Machine Learning). Data Expertise : Strong foundation in data engineering, data analytics, or data science, with More ❯
systems. Excellent analytical and problem-solving skills. Effective communication of complex ideas. Ability to work independently and collaboratively. Preferred Skills: Experience building scalable applications with LLMs using frameworks like LangChain, LlamaIndex, Hugging Face, etc. Deep knowledge of RAG implementation and enhancements. Benefits & perks (UK full-time employees): Generous PTO and holidays. Comprehensive medical and dental insurance. Paid parental leave More ❯
pre-trained open source models Strong understanding of machine learning workflows, including model evaluations and LLM fine-tuning Familiarity with AI orchestration and agent-based systems and best practices (LangChain, AutoGen, n8n) Excellent problem-solving skills and the ability to work independently and collaboratively. Strong communication skills and the ability to translate technical concepts to non-technical stakeholders The person More ❯
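To illustrate the agent and orchestration pattern behind frameworks like LangChain, AutoGen, and n8n, a single tool-calling round using the OpenAI chat API; the tool, model, and prompt are illustrative assumptions:

```python
# Sketch: one tool-calling round, the building block behind agent frameworks.
# The weather tool is a stand-in; model and prompt are illustrative.
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    return f"Sunny and 21C in {city}"  # stand-in for a real API call

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Manchester?"}]
resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
call = resp.choices[0].message.tool_calls[0]
result = get_weather(**json.loads(call.function.arguments))

# Feed the tool result back so the model can produce the final answer.
messages += [resp.choices[0].message,
             {"role": "tool", "tool_call_id": call.id, "content": result}]
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
print(final.choices[0].message.content)
```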
Have good communication skills. Nice to have Experience deploying LLMs and agent-based systems Our technology stack Python and associated ML/DS libraries (scikit-learn, numpy, pandas, LightGBM, LangChain/LangGraph, TensorFlow, etc.) PySpark AWS cloud infrastructure: EMR, ECS, ECR, Athena, etc. MLOps: Terraform, Docker, Spacelift, Airflow, MLFlow Monitoring: New Relic CI/CD: Jenkins, Github Actions More information More ❯
modular, distributed, and asynchronous systems Solid experience leading full-stack application development teams Expert-level proficiency in Python and hands-on experience with at least one LLM-based framework (LangChain, LangGraph, LangSmith, LlamaIndex, Qdrant, etc.) Strong experience with asynchronous queues (e.g., Kafka, RabbitMQ) and asynchronous APIs Deep understanding of cloud infrastructure (AWS, GCP) and experience deploying and managing applications at More ❯