for data quality, lineage, privacy, and security, ensuring our AI systems are developed and used responsibly and ethically. Tooling the Future: Get hands-on with cutting-edge technologies like HuggingFace, PyTorch, TensorFlow, Apache Spark, Apache Airflow, and other modern data and ML frameworks. Collaborate and Lead: Partner closely with ML Engineers, Data Scientists, and Researchers to understand … Data Governance & Ethics: Experience implementing data governance frameworks, ensuring data quality, privacy, and compliance, with an awareness of ethical AI considerations. Bonus Points If You Have: Direct experience with HuggingFace ecosystem, PyTorch, or TensorFlow for data preparation in an ML context. Experience with real-time data streaming architectures. Familiarity with containerization (Docker, Kubernetes). Master's or More ❯
our product and community. Engineering Design, develop, and optimize AI/ML features in Qdrant's core engine and SDKs. Prototype and implement integrations with popular ML frameworks (e.g., HuggingFace, OpenAI, LangChain). Analyze performance, identify bottlenecks, and implement scalable solutions in real-world AI pipelines. Collaborate with product and engineering teams to define and deliver impactful … inform product development. Requirements Strong proficiency in Python. Solid understanding of machine learning concepts, embeddings, and vector search. Experience with at least one modern ML framework (e.g., PyTorch, TensorFlow, HuggingFace). Excellent communication skills; ability to explain technical topics to diverse audiences. Prior experience contributing to open-source projects or engaging with developer communities. Comfortable presenting and More ❯
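To make the vector-search work described above concrete, here is a minimal sketch of upserting embeddings into Qdrant and running a similarity query, assuming the qdrant-client package and its in-memory mode; the collection name, vectors, and payloads are illustrative only.

```python
# Minimal sketch: index a couple of vectors in Qdrant and run a similarity search.
# Collection name, vector values, and payloads are illustrative.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(":memory:")  # in-memory instance for local experimentation

client.recreate_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

client.upsert(
    collection_name="docs",
    points=[
        PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"text": "first doc"}),
        PointStruct(id=2, vector=[0.4, 0.3, 0.2, 0.1], payload={"text": "second doc"}),
    ],
)

hits = client.search(collection_name="docs", query_vector=[0.1, 0.2, 0.3, 0.4], limit=1)
print(hits[0].payload)
```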
Qualifications and experience we consider to be essential for the role: Programming & Libraries: Deep proficiency in Python and extensive experience with relevant AI/ML/NLP libraries (e.g., HuggingFace Transformers, spaCy, NLTK). LLM Expertise: Proven experience developing applications leveraging state-of-the-art LLMs (e.g., GPT series, Llama series, Mistral, Claude) including prompt engineering, fine More ❯
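For the HuggingFace Transformers proficiency mentioned above, a minimal sketch of the library's pipeline API is shown below; the task uses the library's default checkpoint, whereas a real project would pin a specific model.

```python
# Minimal sketch: a Transformers pipeline for a quick NLP task.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # library default model; pin a checkpoint in practice
print(classifier("The onboarding documentation was clear and easy to follow."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```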
Gemini, Llama, Falcon, Mistral. Model performance and optimization: Fine-tuning and optimizing LLMs for quality, latency, sustainability, and cost. Programming and NLP tools: Advanced Python, frameworks like PyTorch, TensorFlow, HuggingFace, LangChain. MLOps and deployment: Docker, Kubernetes, Azure ML Studio, MLFlow. Cloud and AI infrastructure: Experience with Azure Cloud for scalable deployment. Databases and data platforms: SQL, NoSQL More ❯
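As an illustration of the MLFlow piece of that MLOps stack, here is a minimal sketch of tracking a fine-tuning run; the experiment name, parameters, and metric values are illustrative placeholders, not a specific workflow.

```python
# Minimal sketch: logging parameters and metrics for a fine-tuning run with MLflow.
import mlflow

mlflow.set_experiment("llm-finetune-demo")          # illustrative experiment name
with mlflow.start_run():
    mlflow.log_param("base_model", "llama-3-8b")    # hypothetical identifier
    mlflow.log_param("learning_rate", 2e-5)
    mlflow.log_metric("eval_loss", 1.87)
    mlflow.log_metric("latency_ms_p95", 420)
```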
principles and version control (Git) Experience working in cloud environments (AWS, GCP, or Azure) Ability to work independently and communicate effectively in a remote team Bonus Points Experience with HuggingFace Transformers, LangChain, or RAG pipelines Knowledge of MLOps tools (e.g., MLflow, Weights & Biases, Docker, Kubernetes) Exposure to data engineering or DevOps practices Contributions to open-source AI More ❯
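The RAG pipelines mentioned in the bonus points boil down to embedding documents, indexing them, and retrieving the nearest neighbours for a query. A minimal sketch follows, assuming the sentence-transformers and faiss-cpu packages; the model name and example texts are illustrative.

```python
# Minimal RAG-style retrieval sketch: embed documents, index with FAISS, fetch the best match.
import faiss
from sentence_transformers import SentenceTransformer

docs = ["Invoices are processed monthly.", "Refunds take five working days."]
model = SentenceTransformer("all-MiniLM-L6-v2")     # illustrative embedding model

doc_vecs = model.encode(docs, normalize_embeddings=True)
index = faiss.IndexFlatIP(doc_vecs.shape[1])        # inner product == cosine for normalized vectors
index.add(doc_vecs)

query_vec = model.encode(["How long do refunds take?"], normalize_embeddings=True)
scores, ids = index.search(query_vec, 1)
print(docs[ids[0][0]], float(scores[0][0]))
```

In a full pipeline the retrieved passage would then be placed into the prompt of a generative model rather than printed.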
data lake architectures, data integration, and data governance, and at least 2 years of experience with cloud-based AI/ML technologies (such as tools from AWS, Azure, Google, HuggingFace, OpenAI and Databricks) building ML or applied AI solutions. A passion for Generative AI, and an understanding of the strengths and weaknesses of generative LLMs Fundamental knowledge of ML, and More ❯
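For the hosted generative-LLM tooling this listing names, a minimal sketch of calling one such API (OpenAI's chat completions) is shown below; it assumes the openai package and an OPENAI_API_KEY in the environment, and the model name is illustrative.

```python
# Minimal sketch: one call to a hosted generative LLM via the OpenAI API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You answer data-governance questions briefly."},
        {"role": "user", "content": "What is data lineage?"},
    ],
)
print(response.choices[0].message.content)
```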
and implementing synchronous, asynchronous, and batch data processing operations Expert-level programming skills in Python, along with experience in using relevant tools and frameworks such as PyTorch, FastAPI and HuggingFace; strong programming skills in Java are a plus Expert-level know-how of ML Ops systems, data pipeline design and implementation, and working with ML platforms (preferably AWS SageMaker) Strong More ❯
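Combining two of the frameworks named here, below is a minimal sketch of a FastAPI service wrapping a HuggingFace pipeline for synchronous inference; the endpoint shape and model choice are illustrative rather than a production design.

```python
# Minimal sketch: a FastAPI endpoint serving a Transformers pipeline.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
classifier = pipeline("sentiment-analysis")  # loaded once at startup

class PredictRequest(BaseModel):
    text: str

@app.post("/predict")
async def predict(req: PredictRequest):
    result = classifier(req.text)[0]
    return {"label": result["label"], "score": float(result["score"])}
```

Run locally with, for example, `uvicorn main:app --reload`, assuming the file is named main.py.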
s degree in AI and/or Computer Science; Hands-on experience integrating LLM APIs (e.g. OpenAI, HuggingFace Inference); Practical experience fine-tuning LLMs via OpenAI, HuggingFace or similar APIs; Strong proficiency in Python; Deep expertise in prompt engineering and tooling like LangChain or LlamaIndex; Proficiency with vector databases (Pinecone, FAISS, Weaviate) and document embedding pipelines; Proven More ❯
or related technical field 2+ years of experience in AI/ML development with a focus on practical applications Strong proficiency in Python and relevant AI libraries (TensorFlow, PyTorch, HuggingFace) Hands-on experience with workflow automation platforms like n8n and Airtable, with a proven track record. Experience with AI agent development and testing methodologies using Google ADK, LangGraph, LlamaIndex More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Robert Half
commercial experience delivering AI/ML projects end-to-end in production environments Strong Python skills with hands-on use of ML libraries like Scikit-learn, TensorFlow, PyTorch, or HuggingFace Solid understanding of machine learning fundamentals and performance evaluation techniques Experience working in cloud platforms (AWS, GCP, or Azure) with MLOps tools (e.g. MLflow, SageMaker, Vertex AI More ❯
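For the "machine learning fundamentals and performance evaluation" requirement above, a minimal scikit-learn sketch is shown below: train a baseline classifier and report standard metrics. The dataset and model are illustrative stand-ins.

```python
# Minimal sketch: train a baseline model and evaluate it with standard classification metrics.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```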
Practical knowledge of AI/ML technologies and their applications, including an understanding of large language models (LLMs), and experience with training and fine-tuning LLMs using frameworks like TensorFlow, PyTorch, or HuggingFace Transformers. Good understanding of programming/scripting languages (e.g., Python, Go) for customizing solutions, creating scripts, or automating tasks. Experience with AI-relevant infrastructure, including Networking (InfiniBand and More ❯
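A minimal sketch of fine-tuning with the HuggingFace Trainer API, as referenced above, follows; it assumes the transformers and datasets packages, and the checkpoint, dataset, and hyperparameters are illustrative placeholders chosen to keep the run small.

```python
# Minimal sketch: fine-tune a small Transformer classifier with the Trainer API.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"  # illustrative base model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb")  # illustrative labelled text dataset
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)
tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),  # small slice for a quick run
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```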
low-latency, high-throughput data pipelines and APIs using Go, Python, or similar. Hands-on NLP experience with both pre-built services (e.g., AWS Comprehend) and custom transformer models (HuggingFace, PyTorch, TensorFlow) with a strong grounding in evaluating NLP models using classification and ranking metrics, and experience running A/B or offline benchmarks. Proficient with MLOps More ❯
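The ranking-metric side of the evaluation requirement above can be illustrated with scikit-learn's NDCG implementation; the relevance labels and model scores below are made up for the example.

```python
# Minimal sketch: NDCG for a ranked list of retrieved documents.
import numpy as np
from sklearn.metrics import ndcg_score

true_relevance = np.asarray([[3, 2, 0, 1, 0]])           # graded relevance of 5 candidate documents
model_scores = np.asarray([[0.9, 0.7, 0.6, 0.4, 0.1]])   # scores the model assigned to the same documents

print("NDCG@5:", ndcg_score(true_relevance, model_scores, k=5))
```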
real-world problems into working software. 🛠️ What You Bring: Fluency in Python and strong grasp of ML/DL concepts. Experience with frameworks like PyTorch, TensorFlow, Scikit-learn, and HuggingFace Transformers. Comfort designing production-ready AI systems—optimising for speed, cost, and value. Passion for leveraging AI assistants and agentic tools to maximise impact. Bonus points if you’ve played More ❯
publications in top-tier AI and neuroscience conferences (NeurIPS, ICLR, ICML, AAAI, CVPR, Cosyne, SFN, CNN, etc.) or peer-reviewed journals Familiarity with deep learning libraries such as PyTorch, HuggingFace Transformers, Accelerate, and Diffusers. Hands-on experience in training and fine-tuning generative models like diffusion models or large language models such as GPT and Llama. Experience with data and More ❯
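Underlying the training and fine-tuning work this listing describes is the standard PyTorch optimisation loop; a toy sketch is below, with a placeholder model and random data rather than a real generative model.

```python
# Minimal sketch: the core PyTorch training loop behind model training or fine-tuning.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))  # toy stand-in model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.randn(256, 16)   # placeholder data
y = torch.randn(256, 1)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()        # compute gradients
    optimizer.step()       # update weights
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```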
training frameworks, libraries and tools. Deep knowledge of state-of-the-art transformer and non-transformer modifications aimed at enhancing intelligence, efficiency and scalability. Strong expertise in PyTorch and HuggingFace libraries with practical experience in model development, continual pretraining, and deployment. More ❯
for transformer architectures as well as alternative approaches. Your expertise should emphasize techniques that enhance model intelligence, efficiency, and scalability within fine-tuning workflows. Strong expertise in PyTorch and HuggingFace libraries with practical experience in developing fine-tuning pipelines, continuously adapting models to new data, and deploying these refined models in production on target platforms. Demonstrated ability More ❯
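One widely used technique for the efficient fine-tuning workflows described above (not named in the listing itself) is parameter-efficient fine-tuning with LoRA adapters. A minimal sketch follows, assuming the peft and transformers packages; the checkpoint and hyperparameters are illustrative.

```python
# Minimal sketch: wrap a causal LM with LoRA adapters so only a small set of extra weights is trained.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative base checkpoint
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, lora_cfg)

model.print_trainable_parameters()  # confirms only the adapter weights require gradients
# The wrapped model can then be passed to a normal Trainer or training loop.
```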
real-time ML applications Maintain and improve recommender systems Build and work with modern API and microservices architectures Qualifications Strong foundation in Python Experience with machine learning, familiar with HuggingFace, PyTorch, and similar ML tools and packages Familiarity with deploying and scaling ML models in the cloud, particularly with AWS and SageMaker Understanding of DevOps processes and tools: CI/ More ❯
in the upside of an ultra-growth venture. Have fun Apply if: You have experience with our stack: Python, FastAPI, Postgres, SQLAlchemy, Alembic, TypeScript, React, LlamaIndex/LangGraph, PyTorch, HuggingFace, OpenAI, Docker, Azure. You have taken entire products or features from ideation to deployment and you've measured their impact. You enjoy diving deep into the domain, understanding the problem More ❯
East London, London, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions
Python and large language models. You’ll work on fine-tuning, integrating, and scaling LLMs for real-world applications. Key Skills: Strong Python engineering background Experience with LLMs (e.g. HuggingFace, OpenAI, LangChain) Model fine-tuning, RAG pipelines, vector databases (e.g. FAISS, Pinecone) Cloud (AWS/GCP), CI/CD, Docker Bonus: Knowledge of model optimization, quantization, or More ❯
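For the quantization bonus point in this listing, below is a minimal sketch of post-training dynamic quantization in PyTorch; the model is a toy stand-in for an LLM component rather than an actual language model.

```python
# Minimal sketch: dynamic quantization of a model's Linear layers to int8 weights.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8  # weights stored as int8, activations quantized on the fly
)
print(quantized)
```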
London, South East, England, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
prompt engineering strategies and orchestration flows Integrate models into backend services via APIs Implement evaluation frameworks for response quality and reliability Essential Skills Strong Python skills and experience with HuggingFace Transformers Familiarity with LLM fine-tuning and inference optimisation Experience with vector search and embeddings (e.g. FAISS, Pinecone) Understanding of prompt engineering and few-shot learning Ability More ❯
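The prompt engineering and few-shot learning skills listed above reduce, in the simplest case, to assembling an instruction, a handful of worked examples, and the new input into one prompt string. A plain-Python sketch follows; the categories and example messages are illustrative.

```python
# Minimal sketch: few-shot prompt construction in plain Python.
EXAMPLES = [
    ("The app crashes every time I open it.", "bug"),
    ("Could you add a dark mode?", "feature_request"),
    ("How do I reset my password?", "question"),
]

def build_prompt(user_message: str) -> str:
    shots = "\n".join(f"Message: {text}\nCategory: {label}" for text, label in EXAMPLES)
    return (
        "Classify each support message into one of: bug, feature_request, question.\n\n"
        f"{shots}\n\nMessage: {user_message}\nCategory:"
    )

print(build_prompt("The export button does nothing."))
```

The resulting string would then be sent to whichever LLM API the service integrates with.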