following database systems – DynamoDB, DocumentDB, MongoDB. Demonstrated expertise in unit testing and tools – JUnit, Mockito, PyTest, Selenium. Strong working knowledge of the PyData stack – pandas, NumPy for data manipulation; Jupyter Notebooks for experimentation; matplotlib/Seaborn for basic visualisation. Experience with data analysis and troubleshooting data-related issues. Knowledge of design patterns and software architectures. Familiarity with CI/CD
vision, Generative AI, Language Models (e.g., GPT), Retrieval-Augmented Generation (RAG).

**Technical Skills:**
* Deep knowledge of Python (including libraries such as scikit-learn, Pandas, NumPy).
* Experience with Jupyter, Spark/Scala or R, PostgreSQL, and the ELK stack.
* Proficiency in Java, Kubernetes, Docker, and microservices-oriented architecture.
* Familiarity with MLOps practices and collaborative tools (e.g., GitLab).

**Language Skills**
one or more relevant database technologies, e.g. MongoDB, PostgreSQL, Snowflake, Oracle. Proficient with a range of open-source frameworks and development tools, e.g. NumPy/SciPy/Pandas, Spark, Jupyter. Advantageous: prior experience of working with financial market data or alternative data; relevant mathematical knowledge, e.g. statistics, time-series analysis; experience in data visualisation and building web apps in modern
a related field. Proven experience in machine learning applications such as recommendation systems, segmentation, and marketing optimisation. Proficiency in Python, SQL, Bash, and Git, with hands-on experience in Jupyter notebooks, Pandas, and PyTorch. Familiarity with cloud platforms (AWS, Databricks, Snowflake) and containerisation tools (Docker, Kubernetes). Strong problem-solving skills and a passion for driving measurable business impact. Knowledge
areas: Big Data Analytics (e.g. Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks, Graph Computing, Microservices Architectures. Modelling & Statistical Analysis experience, ideally customer-related. A numbers-based university degree, e.g. Computer Science or Geography. Relevant industry sector knowledge ideal but not essential. A
in building machine learning models for tasks like recommendations, segmentation, forecasting, and optimising marketing spend. Proficiency in Python, SQL, Bash, and Git, with hands-on experience in tools like Jupyter notebooks, Pandas, PyTorch, and more. Experience with A/B testing and other experimentation methods to validate model performance and business impact. Experience with cloud platforms (AWS, Databricks, Snowflake), containerisation
Reigate, England, United Kingdom Hybrid / WFH Options
esure Group
management readiness. Proven track record of taking research from concept to business impact. Strong Python toolkit proficiency for Data Science, with experience in SQL and NoSQL databases. Familiarity with Jupyter Notebooks and Git version control. Expertise in working with large, sophisticated datasets and extracting actionable insights. Project management experience with tight deadlines. Ability to work independently and take ownership of
and PyTorch. Exposure to LLMs from model families such as Anthropic, Meta, Amazon, and OpenAI. Familiarity with tools and packages like Pandas, NumPy, scikit-learn, Plotly/Matplotlib, and Jupyter Notebooks. Knowledge of ML-adjacent technologies, including AWS SageMaker and Apache Airflow. Proficiency in data pre-processing, data wrangling, and augmentation techniques. Experience with cloud platforms (e.g. AWS, Google Cloud
with 10 years, bachelor's with 8 years, master's with 6 years, or PhD with 4 years. Proficiency in data science languages and tools (e.g., Python, R, SQL, Jupyter, Pandas, Scikit-learn). Experience with machine learning frameworks (e.g., TensorFlow, PyTorch) and big data platforms (e.g., Spark, Hadoop). Strong background in statistics, data modeling, and algorithm development. Ability to explain
learning models. Build AI systems using Large Language Models. Build processes for extracting, cleaning and transforming data (SQL/Python). Ad-hoc data mining for insights using Python + Jupyter notebooks. Present insights and predictions in live dashboards using Tableau/Power BI. Lead the presentation of findings to clients through written documentation, calls, and presentations. Actively seek out new opportunities
London, England, United Kingdom Hybrid / WFH Options
Compare the Market
design and deployment. Strong software engineering skills, including version control (Git), code reviews, and unit testing. Familiarity with common data science libraries and tools (e.g., NumPy, Pandas, Scikit-learn, Jupyter). Experience in setting up and managing continuous integration and continuous deployment pipelines. Proficiency with containerization technologies (e.g., Docker, Kubernetes). Experience with cloud services (e.g., AWS, GCP, Azure) for
visualization required - Strong understanding of AI/ML algorithms and techniques - Proficiency in scripting languages such as Python, Scala, SQL - Experience working within coding environments such as Databricks and Jupyter - Experience developing and deploying software solutions in the cloud, such as AWS or Azure - Ability to work collaboratively with stakeholders and in a team environment - Excellent communication and documentation skills - Tableau
testing frameworks (e.g., DoWhy, causalml). Programming & Data Tools: Python: Strong foundation in Pandas, NumPy, matplotlib/seaborn, scikit-learn, TensorFlow, PyTorch, etc. SQL: Advanced querying for large-scale datasets. Jupyter, Databricks, or notebook-based workflows for experimentation. Data Access & Engineering Collaboration: Comfort working with cloud data warehouses (e.g., Snowflake, Databricks, Redshift, BigQuery). Familiarity with data pipelines and orchestration tools like
analysis, artificial intelligence, or software engineering with data analysis software (R, Python, SAS, MATLAB). Experience with Windows server management and Power BI Report Server. Proficiency with Python/Jupyter Notebooks, SQL and relational databases, and Elasticsearch/Kibana. Background working with analyst teams, including conducting data analysis and creating data visualizations. Experience with graph analytics, data pipelines, and IP-based
and interpreting large datasets. Ability to apply statistical techniques to validate models and algorithms. Data Manipulation & Analysis: Proficient in data manipulation and analysis using tools like Pandas, NumPy, and Jupyter Notebooks. Experience with data visualization tools such as Matplotlib, Seaborn, or Tableau to communicate insights effectively. Our offer: Cash: depends on experience. Equity: generous equity package, on a standard vesting
London, England, United Kingdom Hybrid / WFH Options
Sojern
and product managers. You can evaluate, analyze, and interpret model results, leading to further improvement of existing statistical model performance. You can perform complex data analysis using SQL/Jupyter notebooks to find underlying issues and propose a solution to stakeholders, explaining the various trade-offs associated with the solution. You can use your grit and initiative to fill in
H14A/9326, H15A/9327, H16A/9328, H33A, H34A) Experience or familiarity with data analytics and/or the following advanced scripting languages and tools: Python, SQL, Jupyter, Pig, ELK Stack, Splunk, Power BI, Jupyter Notebooks. Compensation ranges encompass a total compensation package and are a general guideline only and not intended as a guaranteed and/or implied
field. Proven experience in machine learning applications such as recommendations, segmentation, forecasting, and marketing spend optimisation. Proficiency in Python, SQL, and Git, with hands-on experience in tools like Jupyter notebooks, Pandas, and PyTorch. Expertise in cloud platforms (AWS, Databricks, Snowflake) and containerisation tools (Docker, Kubernetes). Strong leadership skills with experience mentoring and managing data science teams. Deep knowledge