a related field. Proven experience in machine learning applications such as recommendation systems, segmentation, and marketing optimisation. Proficiency in Python, SQL, Bash, and Git, with hands-on experience in Jupyter notebooks, Pandas, and PyTorch. Familiarity with cloud platforms (AWS, Databricks, Snowflake) and containerisation tools (Docker, Kubernetes). Strong problem-solving skills and a passion for driving measurable business impact. Knowledge …
areas: Big Data Analytics (e.g. Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks, Graph Computing, Microservices Architectures. Modelling & Statistical Analysis experience, ideally customer-related. A numbers-based university degree, e.g. Computer Science or Geography. Relevant industry sector knowledge ideal but not essential. A …
Reigate, England, United Kingdom Hybrid / WFH Options
esure Group
management readiness. Proven track record of taking research from concept to business impact. Strong Python toolkit proficiency for Data Science, with experience in SQL and NoSQL databases. Familiarity with Jupyter Notebooks and Git version control. Expertise in working with large, sophisticated datasets and extracting actionable insights. Project management experience with tight deadlines. Ability to work independently and take ownership of …
learning models Build AI systems using Large Language Models Build processes for extracting, cleaning and transforming data (SQL/Python) Ad-hoc data mining for insights using Python + Jupyter notebooks Present insights and predictions in live dashboards using Tableau/PowerBI Lead the presentation of findings to clients through written documentation, calls, and presentations Actively seek out new opportunities …
testing frameworks (e.g., DoWhy, causalml) Programming & Data Tools: Python: Strong foundation in Pandas, NumPy, matplotlib/seaborn, scikit-learn, TensorFlow, PyTorch, etc. SQL: Advanced querying for large-scale datasets. Jupyter, Databricks, or notebook-based workflows for experimentation. Data Access & Engineering Collaboration: Comfort working with cloud data warehouses (e.g., Snowflake, Databricks, Redshift, BigQuery). Familiarity with data pipelines and orchestration tools like …
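The experimentation skills above (testing frameworks plus Pandas/NumPy) can be sketched with a minimal A/B significance check. This is an illustrative example on synthetic data, assuming SciPy is available; the metric and group sizes are invented for the sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic per-user metric for two experiment arms (illustrative values only).
control = rng.normal(loc=0.10, scale=0.05, size=1000)
treatment = rng.normal(loc=0.11, scale=0.05, size=1000)

# Welch's t-test: does the treatment arm differ from control?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
lift = treatment.mean() - control.mean()
print(f"lift={lift:.4f}, p={p_value:.4g}")
```

In a real pipeline the arms would come from a warehouse query rather than a random generator, and a framework like DoWhy would be used when confounding, not just randomised assignment, is in play.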
and interpreting large datasets. Ability to apply statistical techniques to validate models and algorithms. Data Manipulation & Analysis: Proficient in data manipulation and analysis using tools like Pandas, NumPy, and Jupyter Notebooks. Experience with data visualization tools such as Matplotlib, Seaborn, or Tableau to communicate insights effectively. Our offer: Cash: Depends on experience. Equity: Generous equity package, on a standard vesting …
field. Proven experience in machine learning applications such as recommendations, segmentation, forecasting, and marketing spend optimisation. Proficiency in Python, SQL, and Git, with hands-on experience in tools like Jupyter notebooks, Pandas, and PyTorch. Expertise in cloud platforms (AWS, Databricks, Snowflake) and containerisation tools (Docker, Kubernetes). Strong leadership skills with experience mentoring and managing data science teams. Deep knowledge …
improve our ability to serve clients. Tech Skills Required: Advanced level of coding in Python for Data Science Software engineering architecture design for applications with integrated Data Science solutions Jupyter server/notebooks AWS: EC2, Sagemaker, S3 Git version control SQL skills including selecting, filtering, aggregating, and joining data using core clauses; use of CTEs, window functions, subqueries, and data …
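The SQL skills listed above (core clauses, CTEs, window functions) can be illustrated in one self-contained query. This sketch uses Python's built-in sqlite3 with an invented `orders` table; it assumes an SQLite build with window-function support (3.25+, standard in modern Python).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer TEXT, amount REAL);
INSERT INTO orders VALUES ('a', 10), ('a', 20), ('b', 5), ('b', 15), ('b', 30);
""")

query = """
WITH totals AS (                -- CTE: aggregate spend per customer
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
)
SELECT customer,
       total,
       RANK() OVER (ORDER BY total DESC) AS spend_rank  -- window function
FROM totals
ORDER BY spend_rank
"""
rows = list(conn.execute(query))
for row in rows:
    print(row)
```

The same pattern (CTE feeding a windowed ranking) carries over directly to warehouse dialects such as Snowflake or Redshift.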
learning models Build AI systems using Large Language Models Build processes for extracting, cleaning and transforming data (SQL/Python) Ad-hoc data mining for insights using Python + Jupyter notebooks Actively seek out new opportunities to learn and develop Be an example of data science best practice, e.g. Git/Docker/cloud deployment Write proposals for exciting new …
healthcare/life science organization is considered an asset. Other skills we are searching for are: Programming Skills: Proficiency in Python, data analytics, deep learning (Scikit-learn, Pandas, PyTorch, Jupyter, pipelines), and practical knowledge of data tools like Databricks, Ray, Vector Databases, Kubernetes, and workflow scheduling tools such as Apache Airflow, Dagster, and Astronomer. GPU Computing: Familiarity with GPU computing …
preprocessing, language modeling, and semantic similarity. Strong proficiency in Python, including use of ML libraries such as TensorFlow, PyTorch, or similar. Experience with data science tools and platforms (e.g., Jupyter, Pandas, NumPy, MLflow). Familiarity with cloud-based AI tools and infrastructure, especially within the AWS ecosystem. Strong understanding of data structures, algorithms, and statistical analysis. Experience working with ETL …
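The semantic-similarity skill mentioned above reduces, at its core, to comparing embedding vectors. A minimal NumPy sketch, with toy 4-dimensional vectors standing in for real model embeddings (the values are invented for illustration):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings" standing in for real model output.
doc = np.array([0.2, 0.8, 0.1, 0.4])
query_close = np.array([0.25, 0.75, 0.15, 0.35])   # semantically close
query_far = np.array([0.9, 0.05, 0.8, 0.1])        # semantically distant

print(cosine_similarity(doc, query_close))  # high (close to 1)
print(cosine_similarity(doc, query_far))    # noticeably lower
```

In practice the vectors come from a language model; the comparison step is unchanged.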
Learning, AI, Statistics, Economics or equivalent) 5+ years of professional working experience Someone who thrives in the incremental delivery of high-quality production systems Proficiency in Java, Python, SQL, Jupyter Notebook Experience with Machine Learning and statistical inference Understanding of ETL processes and data pipelines and ability to work closely with Machine Learning Engineers for product implementation Ability to communicate …
clustering, classification, predictive modelling) through coursework, internships, or independent projects You are proficient in Python (especially pandas, numpy, scikit-learn, or similar libraries) and comfortable performing data analysis using Jupyter notebooks or similar tools You are comfortable writing clear, efficient SQL for extracting, cleaning, and preparing datasets, demonstrated through coursework, internships, or personal analytical projects You have demonstrated initiative by …
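The clustering experience asked for above can be demonstrated in a few lines of scikit-learn. A minimal sketch on two synthetic, well-separated blobs standing in for customer feature vectors (the data is invented for the example):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Two synthetic groups of points standing in for customer features.
blob_a = rng.normal(loc=[0, 0], scale=0.3, size=(50, 2))
blob_b = rng.normal(loc=[5, 5], scale=0.3, size=(50, 2))
X = np.vstack([blob_a, blob_b])

# Fit k-means with k=2; labels_ assigns each point to a cluster.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = model.labels_
print(np.bincount(labels))  # cluster sizes
```

On real data, choosing k (e.g. via silhouette scores) and scaling features first are the steps that matter most.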
real business impact. What We're Looking For A strong foundation in data structures, algorithms, data modelling, and software architecture. Solid hands-on experience in Python and its ecosystem (Jupyter, Pandas, Scikit-learn, Matplotlib), and comfort working with SQL for data analysis. Experience with LangChain is a plus. Experience of delivering AI and ML-based products into production environments, with …
culture of Millennium, judged by the ability to deliver timely solutions to portfolio and risk managers within the firm. Mandatory Requirements 3+ years Python development experience (pandas, NumPy, Polars, Jupyter notebooks, FastAPI) Experience with AWS services, such as: S3, EC2, AWS Batch and Redshift Proficiency in relational and non-relational database technologies BA or Master's in computer science/ …
Playwright or similar testing frameworks. REST APIs: Strong understanding of integrating and working with RESTful services. Data Skills: Experience in data wrangling/analysis (e.g., using SQL or Python, Jupyter Notebook). Collaboration: Experience working in an Agile environment (Scrum/Kanban). Problem-Solving: Strong analytical and troubleshooting skills. Desirable Skills Familiarity with state management libraries (MobX, Redux).
program with a strong emphasis on hypothesis-based hunting methodologies. Use threat intelligence, MITRE ATT&CK, and risk models to form hypotheses and validate them through structured hunts. Leverage Jupyter Notebooks and other tools to automate hunts, visualise results, and create reusable artifacts for future investigations and detections. Collaborate with detection engineering to convert threat hunt findings into high-fidelity … investigate advanced threats beyond signature-based solutions. Adept at leveraging Splunk for data analysis and detection development, they bring strong scripting capabilities (e.g., Python, PowerShell, SQL) and experience using Jupyter Notebooks to automate hunts and visualise results. This individual has successfully built or significantly contributed to threat hunting programs, translating threat intelligence into actionable insights and working alongside detection engineers … security, engineering, and business teams. Strong command of Splunk's Search Processing Language (SPL). Strong scripting/query language skills (e.g., Python, KQL, SQL, PowerShell). Desirable Requirements Hands-on experience using Jupyter Notebooks for data exploration, automation, and visualisation in a security context. Knowledge of cloud products and log events such as Azure, Amazon Web Services, Google Cloud Platform. Experience building a …
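The hypothesis-based hunting described above often starts as a notebook-sized pandas query. A minimal sketch on a hypothetical auth-log extract; the schema, field names, and the hypothesis itself are illustrative only, not a real detection rule:

```python
import pandas as pd

# Hypothetical auth-log extract; column names are illustrative only.
logs = pd.DataFrame({
    "user": ["alice", "alice", "bob", "bob", "bob", "svc_backup", "svc_backup"],
    "src_country": ["GB", "GB", "GB", "GB", "RO", "GB", "GB"],
    "outcome": ["success", "success", "failure", "failure", "success", "success", "success"],
})

# Hypothesis: accounts with failed logins plus activity from more than one
# country are worth a closer look.
by_user = logs.groupby("user").agg(
    countries=("src_country", "nunique"),
    failures=("outcome", lambda s: (s == "failure").sum()),
)
suspects = by_user[(by_user["countries"] > 1) & (by_user["failures"] > 0)]
print(suspects.index.tolist())
```

The reusable part is the pattern: pull a bounded slice of logs, encode the hypothesis as a filter, and keep the notebook as an artifact for the next hunt.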
data analysis. Strong technical skills regarding data analysis, statistics, and programming. Strong working knowledge of Python, Hadoop, SQL, and/or R. Working knowledge of Python data tools (e.g. Jupyter, Pandas, Scikit-Learn, Matplotlib). Ability to talk the language of statistics, finance, and economics a plus. Profound knowledge of the English language. In a changing world, diversity and inclusion …