to detail, and a collaborative, growth-focused mindset. Experience working in agile, product-driven engineering teams. Preferred Qualifications: Exposure to Retrieval-Augmented Generation (RAG) pipelines, vector databases (e.g., Pinecone, Weaviate, Milvus), and knowledge bases, with familiarity in integrating them with LLMs. Experience with advanced model monitoring, observability, and governance of LLMs and generative AI systems. Experience with data engineering or More ❯
vector databases), AutoGPT. Data Engineering & ML Pipelines: Apache Airflow, MLflow, Kubeflow, dbt, Prefect. Cloud & Deployment Platforms: AWS SageMaker, Azure ML, Google Vertex AI. APIs & Orchestration: OpenAI API, Anthropic Claude, Weaviate, FastAPI (for AI applications). MLOps & Experimentation: Weights & Biases, DVC (Data Version Control), Docker, Kubernetes. General: 2+ years of professional experience in relevant fields. Experience mentoring, coaching, or teaching others in More ❯
/services. Strong skills in MLOps: containerisation (Docker, Kubernetes), cloud deployment (AWS, GCP, Azure), and CI/CD pipelines. Experience with prompt engineering, LLM evaluation, and vector databases (Pinecone, Weaviate, FAISS). Excellent communication skills and cross-functional collaboration experience. Benefits: Our team at OneClickComply is central to our 100% customer satisfaction, and we offer some of the best benefits More ❯
modern web frameworks. Deep experience with AI/ML frameworks (PyTorch, TensorFlow, Transformers, LangChain). Mastery of prompt engineering and fine-tuning of Large Language Models. Proficient in vector databases (Pinecone, Weaviate, Milvus) and embedding technologies. Expert in building RAG (Retrieval-Augmented Generation) systems at scale. Strong experience with MLOps practices and model deployment pipelines. Proficient in cloud AI services (AWS SageMaker More ❯
building with generative AI applications in production environments. Expertise with microservices architecture and RESTful APIs. Solid understanding of database technologies such as PostgreSQL and vector databases such as Elastic, Pinecone, Weaviate, or similar. Familiarity with cloud platforms (AWS, GCP, etc.) and containerized environments (Docker, Kubernetes). You are committed to writing clean, maintainable, and scalable code, following best practices in software More ❯
language models and enthusiasm for solving real-world product challenges. Clear communication and ability to thrive in a distributed team. Nice to Have: Experience with vector databases (e.g., FAISS, Weaviate, pgvector). Knowledge of survey data structures or market research workflows. Familiarity with statistics (e.g., weighting, significance testing). Experience with Docker, Hugging Face Transformers, or cloud-based deployment. Awareness of data More ❯
background in Computer Science and Software Development. Experience with complex RAG pipelines (without using LangChain or LlamaIndex). Tech stack includes: Foundation and open-source LLMs, vector databases (Pinecone, Qdrant, Weaviate), embeddings, Next.js, Vercel, MongoDB, Cohere reranking, multimodal parsing (e.g., Unstructured.io). This is a project-based, part-time, contract role. Must provide work samples (GitHub). Must be based in US More ❯
and maintain AI microservices using Docker, Kubernetes, and FastAPI, ensuring smooth model serving and error handling; Vector Search & Retrieval: Implement retrieval-augmented workflows: ingest documents, index embeddings (Pinecone, FAISS, Weaviate), and build similarity search features. Rapid Prototyping: Create interactive AI demos and proofs-of-concept with Streamlit, Gradio, or Next.js for stakeholder feedback; MLOps & Deployment: Implement CI/CD pipelines … tuning LLMs via OpenAI, HuggingFace or similar APIs; Strong proficiency in Python; Deep expertise in prompt engineering and tooling like LangChain or LlamaIndex; Proficiency with vector databases (Pinecone, FAISS, Weaviate) and document embedding pipelines; Proven rapid-prototyping skills using Streamlit or equivalent frameworks for UI demos. Familiarity with containerization (Docker) and at least one orchestration/deployment platform; Excellent communication More ❯
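The retrieval-augmented workflow described in the listing above (ingest documents, index their embeddings, then run similarity search) can be illustrated with a minimal sketch. This example assumes the sentence-transformers and faiss-cpu packages are available; the model name, sample documents, and query are purely illustrative and not part of any listed role's stack.

```python
# Minimal retrieval sketch: embed documents, index them, and run a similarity search.
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Invoices are processed within 30 days of receipt.",
    "Refund requests must include the original order number.",
    "Support tickets are triaged by severity before assignment.",
]

# Ingest step: embed the documents (any embedding model could be substituted here).
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(documents, normalize_embeddings=True)

# Index step: with L2-normalised vectors, inner product equals cosine similarity.
index = faiss.IndexFlatIP(doc_vectors.shape[1])
index.add(doc_vectors)

# Search step: embed the query and retrieve the top-2 closest documents.
query_vectors = model.encode(["How long does invoice processing take?"],
                             normalize_embeddings=True)
scores, ids = index.search(query_vectors, 2)
for score, doc_id in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {documents[doc_id]}")
```

In a production setting, the in-memory FAISS index would typically be swapped for a managed vector database such as Pinecone or Weaviate, with the same embed-index-search flow.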
Our benefits: Share Options (EMI) scheme. 25 days annual leave, plus flexible bank holidays and the opportunity to buy additional days. Enhanced workplace Pension scheme - opt-in salary sacrifice scheme. Life Insurance (3x annual salary). Employee Assistance Programme (EAP) and workplace More ❯