following database systems - DynamoDB, DocumentDB, MongoDB. Demonstrated expertise in unit testing and tools - JUnit, Mockito, PyTest, Selenium. Strong working knowledge of the PyData stack - pandas, NumPy for data manipulation; Jupyter Notebooks for experimentation; matplotlib/Seaborn for basic visualisation. Experience with data analysis and troubleshooting data-related issues. Knowledge of design patterns and software architectures. Familiarity with CI/CD More ❯
knowledge of React (not a frontend role, but an understanding of the stack is important). Hands-on experience with containerisation (Docker) and cloud deployment (Terraform, microservices, Azure). Exposure to Jupyter notebooks, and understanding of how machine learning models are developed and deployed. Experience in fast-paced or start-up environments where you’ve contributed across the stack. Background & Education: Degree More ❯
design and deployment. Strong software engineering skills, including version control (Git), code reviews, and unit testing. Familiarity with common data science libraries and tools (e.g., NumPy, Pandas, Scikit-learn, Jupyter). Experience in setting up and managing continuous integration and continuous deployment pipelines. Proficiency with containerization technologies (e.g., Docker, Kubernetes). Experience with cloud services (e.g., AWS, GCP, Azure) for More ❯
and product managers. You can evaluate, analyze and interpret model results, driving further improvement of existing statistical model performance. You can perform complex data analysis using SQL/Jupyter notebooks to find underlying issues and propose a solution to stakeholders, explaining the various trade-offs associated with the solution. You can use your grit and initiative to fill in More ❯
learning models. Build AI systems using Large Language Models. Build processes for extracting, cleaning and transforming data (SQL/Python). Ad-hoc data mining for insights using Python + Jupyter notebooks. Present insights and predictions in live dashboards using Tableau/PowerBI. Lead the presentation of findings to clients through written documentation, calls and presentations. Actively seek out new opportunities More ❯
field. Proven experience in machine learning applications such as recommendations, segmentation, forecasting, and marketing spend optimisation. Proficiency in Python, SQL, and Git, with hands-on experience in tools like Jupyter notebooks, Pandas, and PyTorch. Expertise in cloud platforms (AWS, Databricks, Snowflake) and containerisation tools (Docker, Kubernetes). Strong leadership skills with experience mentoring and managing data science teams. Deep knowledge More ❯
learning models in production environments. API Development: An understanding of REST. Experience with Flask or FastAPI. Data Validation: Knowledge of Pydantic for data validation. Scripting and Prototyping: Use of Jupyter Notebooks for quick prototyping. DevSecOps Practices: Understanding of secure coding and automated testing. Experience with Pytest or a Python testing framework. You'll be able to be yourself; we'll More ❯
Services (S3, EKS, ECR, EMR, etc.) • Experience with containers and orchestration (e.g. Docker, Kubernetes) • Experience with Big Data processing technologies (Spark, Hadoop, Flink, etc.) • Experience with interactive notebooks (e.g. JupyterHub, Databricks) • Experience with GitOps-style automation • Experience with *nix (e.g. Linux, BSD, etc.) tooling and scripting • Participated in projects that are based on data science methodologies, and/or More ❯
AI/ML/Data Science apprenticeship programme. Core Skills & Competencies - Technical Skills: Programming proficiency in Python and common ML libraries such as TensorFlow, PyTorch, or similar. Experience with Jupyter Notebooks and version control (Git/GitHub). Basic understanding of supervised/unsupervised learning, neural networks, or clustering. Analytical Abilities: Ability to interpret data trends, visualize outputs, and debug More ❯
understanding of strengths and weaknesses of generative LLMs. Fundamental knowledge of ML, and basic knowledge of AI, NLP, and Large Language Models (LLMs). Comfortable working with Python and Jupyter Notebooks. Should have in-depth knowledge and familiarity with cloud platforms like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Technical Skills - Good to have: Expertise in More ❯
Playwright or similar testing frameworks. REST APIs: Strong understanding of integrating and working with RESTful services. Data Skills: Experience in data wrangling/analysis (e.g., using SQL or Python, Jupyter Notebook). Collaboration: Experience working in an Agile environment (Scrum/Kanban). Problem-Solving: Strong analytical and troubleshooting skills. Desirable Skills: Familiarity with state management libraries (MobX, Redux). More ❯
Cambridge, Cambridgeshire, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
and model evaluation. Requirements: 3+ years of experience in data science or ML, ideally in biotech or healthcare. Strong Python programming skills and experience with ML libraries. Familiarity with Jupyter, Pandas, NumPy, and MLFlow. Experience working with clinical or biological datasets is a big plus. Comfortable working in a fast-paced, research-driven environment. Bonus Skills: Knowledge of genomics, bioinformatics More ❯
and the Hadoop Ecosystem. Edge technologies, e.g. NGINX, HAProxy, etc. Excellent knowledge of YAML or similar languages. The following Technical Skills & Experience would be desirable for a Data DevOps Engineer: JupyterHub awareness; MinIO or similar S3 storage technology; Trino/Presto; RabbitMQ or other common queue technology, e.g. ActiveMQ; NiFi; Rego; familiarity with code development, shell-scripting in Python, Bash More ❯
and other Qualtrics products. Acquire data from customers (usually SFTP or cloud storage APIs). Validate data with exceptional detail orientation (including audio data). Perform data transformations (using Python and Jupyter Notebooks). Load the data via APIs or pre-built Discover connectors. Advise our Sales Engineers and customers as needed on the data, integrations, architecture, best practices, etc. Build new AWS More ❯
data analysis. Strong technical skills regarding data analysis, statistics, and programming. Strong working knowledge of Python, Hadoop, SQL, and/or R. Working knowledge of Python data tools (e.g. Jupyter, Pandas, Scikit-Learn, Matplotlib). Ability to talk the language of statistics, finance, and economics a plus. Excellent command of the English language. In a changing world, diversity and inclusion More ❯
Central London, London, United Kingdom Hybrid / WFH Options
Singular Recruitment
include: 3+ years industry experience in a Data Science role and a strong academic background. Python Data Science Stack: Advanced proficiency in Python, including pandas, NumPy, scikit-learn, and Jupyter Notebooks. Statistical & ML Modelling: Strong foundation in statistical analysis and proven experience applying a range of machine learning techniques to solve business problems (e.g., regression, classification, clustering, time-series More ❯
DevOps Methodologies: experience of working on Agile projects. Good understanding of SOA/microservices-based architectures. Good understanding of OOP, SOLID principles and software design patterns. Knowledge of Python (Jupyter notebooks). Benefits offered: Bonus, Pension (9% non-contributory plus additional matched contributions), 4x Life Assurance, Group Income Protection, Season Ticket Loan, GAYE, BUPA Private Medical, Private GP, Travel Insurance More ❯
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Curo Resourcing Ltd
Big Data solutions (ecosystems) and technologies such as: Apache Spark and the Hadoop Ecosystem. Excellent knowledge of YAML or similar languages. The following Technical Skills & Experience would be desirable: JupyterHub awareness; RabbitMQ or other common queue technology, e.g. ActiveMQ; NiFi; Rego; familiarity with code development, shell-scripting in Python, Bash, etc. To apply for this DV Cleared DevOps Engineer More ❯