following database systems: DynamoDB, DocumentDB, MongoDB. Demonstrated expertise in unit testing and tools: JUnit, Mockito, PyTest, Selenium. Strong working knowledge of the PyData stack: pandas and NumPy for data manipulation; Jupyter Notebooks for experimentation; Matplotlib/Seaborn for basic visualisation. Experience with data analysis and troubleshooting data-related issues. Knowledge of design patterns and software architectures. Familiarity with CI/CD.
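As an illustration of the kind of pandas/NumPy data manipulation this listing describes, a minimal sketch; the dataset and column names are invented for the example, not taken from the listing:

```python
import pandas as pd

# Hypothetical sales data; column names are illustrative only.
df = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "units": [10, 7, 3, 12],
    "price": [2.5, 4.0, 2.5, 4.0],
})

# Vectorised column arithmetic (NumPy-backed under the hood).
df["revenue"] = df["units"] * df["price"]

# Aggregate revenue per region -- the bread and butter of exploratory analysis.
summary = df.groupby("region")["revenue"].sum()
print(summary.to_dict())  # {'north': 32.5, 'south': 76.0}
```

In a Jupyter Notebook the `summary` series could then be passed straight to `summary.plot(kind="bar")` for a quick Matplotlib visualisation.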
knowledge of React (not a frontend role, but an understanding of the stack is important). Hands-on experience with containerisation (Docker) and cloud deployment (Terraform, microservices, Azure). Exposure to Jupyter notebooks, and an understanding of how machine learning models are developed and deployed. Experience in fast-paced or start-up environments where you've contributed across the stack. Background & Education: Degree
design and deployment. Strong software engineering skills, including version control (Git), code reviews, and unit testing. Familiarity with common data science libraries and tools (e.g., NumPy, Pandas, Scikit-learn, Jupyter). Experience in setting up and managing continuous integration and continuous deployment (CI/CD) pipelines. Proficiency with containerization technologies (e.g., Docker, Kubernetes). Experience with cloud services (e.g., AWS, GCP, Azure) for
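To illustrate the unit-testing requirement, a minimal pytest-style sketch; `normalise_scores` is a hypothetical function invented for this example:

```python
# Minimal pytest-style unit test. The function under test is invented
# for illustration, not taken from the listing.
def normalise_scores(scores):
    """Scale a list of numbers into the 0-1 range."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def test_normalise_scores():
    assert normalise_scores([2, 4, 6]) == [0.0, 0.5, 1.0]
    # Degenerate input: all-equal values must not divide by zero.
    assert normalise_scores([5, 5]) == [0.0, 0.0]
```

Saved as `test_scores.py`, this is discovered and run by a plain `pytest` invocation, which is typically wired into the CI pipeline mentioned above.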
learning models. Build AI systems using Large Language Models. Build processes for extracting, cleaning and transforming data (SQL/Python). Ad-hoc data mining for insights using Python and Jupyter notebooks. Present insights and predictions in live dashboards using Tableau/Power BI. Lead the presentation of findings to clients through written documentation, calls and presentations. Actively seek out new opportunities
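A minimal sketch of the extract-clean-transform pattern this role describes, using SQL plus Python; the table and column names are hypothetical:

```python
import sqlite3

# Hypothetical raw table; names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id TEXT, amount TEXT)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [("u1", "10.5"), ("u2", ""), ("u1", "4.5")],
)

# Extract with SQL, then clean (drop blanks) and transform (cast, aggregate) in Python.
rows = conn.execute("SELECT user_id, amount FROM raw_events").fetchall()
totals = {}
for user_id, amount in rows:
    if amount:  # cleaning step: skip missing values
        totals[user_id] = totals.get(user_id, 0.0) + float(amount)

print(totals)  # {'u1': 15.0}
```

In practice the same shape scales up: the SQL does the heavy extraction and the Python layer handles validation and reshaping before loading downstream.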
field. Proven experience in machine learning applications such as recommendations, segmentation, forecasting, and marketing spend optimisation. Proficiency in Python, SQL, and Git, with hands-on experience in tools like Jupyter notebooks, Pandas, and PyTorch. Expertise in cloud platforms (AWS, Databricks, Snowflake) and containerisation tools (Docker, Kubernetes). Strong leadership skills with experience mentoring and managing data science teams. Deep knowledge
learning models in production environments. API Development: An understanding of REST. Experience with Flask or FastAPI. Data Validation: Knowledge of Pydantic for data validation. Scripting and Prototyping: Use of Jupyter Notebooks for quick prototyping. DevSecOps Practices: Understanding of secure coding and automated testing. Experience with Pytest or a Python testing framework. You'll be able to be yourself; we'll
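A minimal sketch of the Pydantic data-validation pattern the listing mentions; the model and its fields are hypothetical, invented for illustration:

```python
from pydantic import BaseModel, ValidationError

# Hypothetical request model; field names are not from the listing.
class Customer(BaseModel):
    name: str
    age: int

# A valid payload: the string "30" is coerced to int by default.
ok = Customer(name="Ada", age="30")

# An invalid payload raises ValidationError instead of letting bad data through.
try:
    Customer(name="Bob", age="not-a-number")
except ValidationError:
    print("rejected invalid payload")
```

The same model class plugs directly into FastAPI as a request body type, which is why the two are commonly paired.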
Services (S3, EKS, ECR, EMR, etc.) •Experience with containers and orchestration (e.g. Docker, Kubernetes) •Experience with Big Data processing technologies (Spark, Hadoop, Flink, etc.) •Experience with interactive notebooks (e.g. JupyterHub, Databricks) •Experience with GitOps-style automation •Experience with *nix (e.g. Linux, BSD, etc.) tooling and scripting •Participated in projects that are based on data science methodologies, and/or
AI/ML/Data Science apprenticeship programme. Core Skills & Competencies. Technical Skills: Programming proficiency in Python and common ML libraries such as TensorFlow, PyTorch, or similar. Experience with Jupyter Notebooks and version control (Git/GitHub). Basic understanding of supervised/unsupervised learning, neural networks, or clustering. Analytical Abilities: Ability to interpret data trends, visualize outputs, and debug
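As an illustration of the unsupervised-learning/clustering basics this programme expects, a toy k-means sketch; the data points are invented for the example:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy data with two obvious groups; purely illustrative.
X = np.array([[1.0], [1.2], [0.8], [8.0], [8.2], [7.8]])

# Unsupervised learning: k-means partitions the points into two clusters
# without being shown any labels.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Points near 1.0 share one label, points near 8.0 the other
# (which numeric label is which is arbitrary).
print(labels)
```

Running this in a Jupyter Notebook and plotting `X` coloured by `labels` is a typical first exercise in interpreting model output.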
Cambridge, Cambridgeshire, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
and model evaluation. Requirements: 3+ years of experience in data science or ML, ideally in biotech or healthcare. Strong Python programming skills and experience with ML libraries. Familiarity with Jupyter, Pandas, NumPy, and MLflow. Experience working with clinical or biological datasets is a big plus. Comfortable working in a fast-paced, research-driven environment. Bonus Skills: Knowledge of genomics, bioinformatics
and the Hadoop Ecosystem. Edge technologies, e.g. NGINX, HAProxy. Excellent knowledge of YAML or similar languages. The following Technical Skills & Experience would be desirable for a Data DevOps Engineer: JupyterHub awareness; MinIO or similar S3 storage technology; Trino/Presto; RabbitMQ or other common queue technology, e.g. ActiveMQ; NiFi; Rego; familiarity with code development, shell-scripting in Python, Bash
and other Qualtrics products. Acquire data from customers (usually SFTP or cloud storage APIs). Validate data with exceptional detail orientation (including audio data). Perform data transformations (using Python and Jupyter Notebooks). Load the data via APIs or pre-built Discover connectors. Advise our Sales Engineers and customers as needed on the data, integrations, architecture, best practices, etc. Build new AWS
data analysis. Strong technical skills regarding data analysis, statistics, and programming. Strong working knowledge of Python, Hadoop, SQL, and/or R. Working knowledge of Python data tools (e.g. Jupyter, Pandas, Scikit-Learn, Matplotlib). Ability to talk the language of statistics, finance, and economics a plus. Fluent command of the English language. In a changing world, diversity and inclusion
Central London, London, United Kingdom Hybrid / WFH Options
Singular Recruitment
include: 3+ years industry experience in a Data Science role and a strong academic background. Python Data Science Stack: advanced proficiency in Python, including pandas, NumPy, scikit-learn, and Jupyter Notebooks. Statistical & ML Modelling: strong foundation in statistical analysis and proven experience applying a range of machine learning techniques to solve business problems (e.g., regression, classification, clustering, time-series
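To illustrate the classification side of the modelling techniques listed, a minimal scikit-learn sketch; the toy data is invented for the example:

```python
from sklearn.linear_model import LogisticRegression

# Toy binary classification: one numeric feature, label = "is the value large?".
# Data is invented purely for illustration.
X = [[0.0], [1.0], [2.0], [8.0], [9.0], [10.0]]
y = [0, 0, 0, 1, 1, 1]

# Fit a logistic regression classifier on the labelled examples.
model = LogisticRegression().fit(X, y)

# Predict for two unseen points, one from each region of the feature space.
print(model.predict([[0.5], [9.5]]))  # [0 1]
```

The same `fit`/`predict` interface carries over to the regression and clustering estimators scikit-learn provides, which is what makes the stack quick to prototype with in a notebook.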
DevOps Methodologies: experience of working on Agile projects. Good understanding of SOA/microservices-based architectures. Good understanding of OOP, SOLID principles and software design patterns. Knowledge of Python (Jupyter notebooks). Benefits offered: Bonus, Pension (9% non-contributory plus additional matched contributions), 4x Life Assurance, Group Income Protection, Season Ticket Loan, GAYE, BUPA Private Medical, Private GP, Travel Insurance
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Curo Resourcing Ltd
Big Data solutions (ecosystems) and technologies such as Apache Spark and the Hadoop Ecosystem. Excellent knowledge of YAML or similar languages. The following Technical Skills & Experience would be desirable: JupyterHub awareness; RabbitMQ or other common queue technology, e.g. ActiveMQ; NiFi; Rego; familiarity with code development, shell-scripting in Python, Bash, etc. To apply for this DV Cleared DevOps Engineer
Comfortable working with imperfect data, ambiguity, and evolving priorities. Bonus: experience with dbt, cloud data warehouses (e.g. BigQuery), or automated experimentation platforms. Technology: Python (incl. pandas, statsmodels, scikit-learn), Jupyter; dbt, SQL (BigQuery, PostgreSQL); Tableau or similar BI tools; GitHub, GCP, Docker (optional but useful). How we expect you to work: Collaboration: We work in cross-functional, autonomous squads where
on our data, so you will need to understand how to develop your own models • Strong programming skills and experience working with Python, Scikit-Learn, SciPy, NumPy, Pandas and Jupyter Notebooks are desirable. Experience with object-oriented programming is beneficial • Publications at top conferences, such as NeurIPS, ICML or ICLR, are highly desirable. Why should you apply? • Highly competitive compensation
effectively and confidently. Build great relationships with Data Science, Technology, Finance, Collections, Ops and other stakeholders. What you'll need: Excellent SQL skills; Python data science stack (pandas, NumPy, Jupyter notebooks, Plotly/Matplotlib, etc.); a drive to solve problems using data; experience in a management role. What would be a bonus: Familiarity with Git; a data visualization tool (Tableau, Looker
and experience in GA4, Google Search Console, Google Tag Manager, Looker Studio, Google Cloud Console (BigQuery), and Google Apps Script. Strong working knowledge of HTML, basic JavaScript, Python and Jupyter Notebooks as they relate to technical SEO analysis. Proficiency in SEO audit tools such as SEMrush, Ahrefs, Screaming Frog, DeepCrawl, or similar. Proficiency gathering marketing insights for analysis and reporting