design and deployment. Strong software engineering skills, including version control (Git), code reviews, and unit testing. Familiarity with common data science libraries and tools (e.g., NumPy, Pandas, Scikit-learn, Jupyter). Experience in setting up and managing continuous integration and continuous deployment pipelines. Proficiency with containerization technologies (e.g., Docker, Kubernetes). Experience with cloud services (e.g., AWS, GCP, Azure) for
and product managers. You can evaluate, analyze and interpret model results, further improving the performance of existing statistical models. You can perform complex data analysis using SQL/Jupyter notebooks to find underlying issues and propose a solution to stakeholders, explaining the various trade-offs associated with the solution. You can use your grit and initiative to fill in
Services (S3, EKS, ECR, EMR, etc.) •Experience with containers and orchestration (e.g. Docker, Kubernetes) •Experience with Big Data processing technologies (Spark, Hadoop, Flink, etc.) •Experience with interactive notebooks (e.g. JupyterHub, Databricks) •Experience with GitOps-style automation •Experience with *nix (e.g. Linux, BSD) tooling and scripting •Participated in projects that are based on data science methodologies, and/or
AI/ML/Data Science apprenticeship programme. Core Skills & Competencies Technical Skills Programming proficiency in Python and common ML libraries such as TensorFlow, PyTorch, or similar. Experience with Jupyter Notebooks and version control (Git/GitHub). Basic understanding of supervised/unsupervised learning, neural networks, or clustering. Analytical Abilities Ability to interpret data trends, visualize outputs, and debug
understanding of the strengths and weaknesses of generative LLMs Fundamental knowledge of ML, and basic knowledge of AI, NLP, and Large Language Models (LLMs) Comfortable working with Python and Jupyter Notebooks Should have in-depth knowledge and familiarity with cloud platforms like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Technical Skills Good to have: Expertise in
Playwright or similar testing frameworks. REST APIs: Strong understanding of integrating and working with RESTful services. Data Skills: Experience in data wrangling/analysis (e.g., using SQL or Python, Jupyter Notebook). Collaboration: Experience working in an Agile environment (Scrum/Kanban). Problem-Solving: Strong analytical and troubleshooting skills. Desirable Skills Familiarity with state management libraries (MobX, Redux).
Cambridge, Cambridgeshire, England, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
and model evaluation Requirements: 3+ years of experience in data science or ML, ideally in biotech or healthcare Strong Python programming skills and experience with ML libraries Familiarity with Jupyter, Pandas, NumPy, and MLflow Experience working with clinical or biological datasets is a big plus Comfortable working in a fast-paced, research-driven environment Bonus Skills: Knowledge of genomics, bioinformatics
and other Qualtrics products Acquire data from customers (usually SFTP or cloud storage APIs) Validate data with exceptional detail orientation (including audio data) Perform data transformations (using Python and Jupyter Notebooks) Load the data via APIs or pre-built Discover connectors Advise our Sales Engineers and customers as needed on the data, integrations, architecture, best practices, etc. Build new AWS
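The validate-and-transform steps this role describes can be sketched in pandas; the column names and checks below are hypothetical, not the actual Discover connector schema:

```python
import pandas as pd

# Hypothetical customer export; column names are illustrative only.
raw = pd.DataFrame({
    "interaction_id": [1, 2, 2, 3],
    "transcript": ["Hello", "Refund please ", "Refund please ", None],
})

# Validate: count duplicate IDs and missing transcripts before loading.
n_dupes = int(raw.duplicated("interaction_id").sum())
n_missing = int(raw["transcript"].isna().sum())

# Transform: drop bad rows and normalize text for the downstream connector.
clean = (
    raw.drop_duplicates("interaction_id")
       .dropna(subset=["transcript"])
       .assign(transcript=lambda df: df["transcript"].str.strip().str.lower())
)
print(n_dupes, n_missing, len(clean))  # 1 1 2
```

In a real pipeline the validation counts would be surfaced to the customer rather than silently dropped.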
Central London, London, United Kingdom Hybrid / WFH Options
Singular Recruitment
include: 3+ years industry experience in a Data Science role and a strong academic background Python Data Science Stack: Advanced proficiency in Python, including pandas, NumPy, scikit-learn, and Jupyter Notebooks. Statistical & ML Modelling: Strong foundation in statistical analysis and proven experience applying a range of machine learning techniques to solve business problems (e.g., regression, classification, clustering, time-series
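As a minimal, self-contained illustration of the classification side of that list (synthetic data, not tied to any particular business problem):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic, linearly separable binary-classification data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Standard fit/score workflow; held-out accuracy should be high here
# because the labels are a deterministic function of the features.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

The same `fit`/`score` pattern carries over to scikit-learn's regression and clustering estimators.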
DevOps Methodologies: experience of working on Agile projects Good understanding of SOA/Microservices based architectures Good understanding of OOP, SOLID principles and software design patterns Knowledge of Python (Jupyter notebooks) Benefits offered Bonus, Pension (9% non-contributory plus additional matched contributions), 4 x Life Assurance, Group Income Protection, Season Ticket Loan, GAYE, BUPA Private Medical, Private GP, Travel Insurance
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Curo Resourcing Ltd
Big Data solutions (ecosystems) and technologies such as: Apache Spark and the Hadoop Ecosystem Excellent knowledge of YAML or similar languages The following Technical Skills & Experience would be desirable: JupyterHub awareness RabbitMQ or other common queue technology, e.g. ActiveMQ NiFi Rego Familiarity with code development, shell-scripting in Python, Bash, etc. To apply for this DV Cleared DevOps Engineer
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Kerv Digital for Digital Transformation
certifications • Azure Synapse: Synapse-link, on-demand SQL engine, dedicated SQL pool • Writing unit tests, Git version control • Awareness of reliability patterns in ETL pipelines • Use of Python in Jupyter notebooks for data processing • Azure storage technologies and cost/performance characteristics • Power BI, DAX, data flows • Techniques and tools for sanitizing data prior to use • Awareness of Kimball modelling
written communication skills, with the ability to explain data findings to both technical and non-technical audiences. Experience delivering data-driven insights to businesses. Familiarity with tools such as Jupyter Notebook and basic Python for data analysis. Some exposure to cloud platforms (e.g., AWS, GCP, or Azure) and interest in learning cloud-based data tools. Experience in using BI tools
London, South East, England, United Kingdom Hybrid / WFH Options
Sanderson
technology across the business. Machine Learning Engineer, key skills: Significant experience working as a Data Scientist/Machine Learning Engineer Solid knowledge of SQL and Python's ecosystem for data analysis (Jupyter, Pandas, scikit-learn, Matplotlib). GCP, Vertex AI experience is desirable (developing GCP machine learning services) Time-series forecasting Solid understanding of computer science fundamentals, including data structures, algorithms, data modelling and
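Since scikit-learn has no dedicated forecasting estimator, time-series forecasting with this stack is commonly framed as supervised regression on lagged features; a sketch on a synthetic series (the real inputs would be business metrics, not a sine wave):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Synthetic, uniformly sampled sine wave standing in for a real metric.
series = pd.Series(np.sin(np.linspace(0.0, 20.0, 200)))

# Frame forecasting as regression on the two previous observations.
frame = pd.DataFrame({
    "y": series,
    "lag1": series.shift(1),
    "lag2": series.shift(2),
}).dropna()
model = LinearRegression().fit(frame[["lag1", "lag2"]], frame["y"])

# One-step-ahead forecast from the two most recent actual values.
last = pd.DataFrame({"lag1": [series.iloc[-1]], "lag2": [series.iloc[-2]]})
next_value = model.predict(last)[0]
```

A uniformly sampled sinusoid satisfies an exact two-lag linear recurrence, so the fit here is essentially perfect; real series need proper backtesting (e.g. rolling-origin evaluation) rather than in-sample checks.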
UX development Communicate clearly and manage blockers proactively Your Profile: 1+ year professional or internship engineering experience Solid foundation in software design patterns and data structures Familiar with Git, Jupyter, command line, and agile workflows Experience with: React.js, Node, Python, CSS, TypeScript, unit testing AI/ML: LangChain, PyTorch, TensorFlow (basic understanding) Bonus: Interest in ethical AI, UX design, and