Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks, Graph Computing, Microservices Architectures. A university degree in a numerate subject, such as Computer Science or Geography. Relevant industry sector knowledge is ideal but not essential. A strong communicator More ❯
using Large Language Models. Build processes for extracting, cleaning and transforming data (SQL/Python). Ad-hoc data mining for insights using Python + Jupyter notebooks. Present insights and predictions in live dashboards using Tableau/PowerBI. Lead the presentation of findings to clients through written documentation, calls, and presentations More ❯
to apply statistical techniques to validate models and algorithms. Data Manipulation & Analysis: Proficient in data manipulation and analysis using tools like Pandas, NumPy, and Jupyter Notebooks. Experience with data visualization tools such as Matplotlib, Seaborn, or Tableau to communicate insights effectively. Our offer: Cash: Depends on experience Equity: Generous equity More ❯
using Large Language Models. Build processes for extracting, cleaning and transforming data (SQL/Python). Ad-hoc data mining for insights using Python + Jupyter notebooks. Actively seek out new opportunities to learn and develop. Be an example of data science best practice, e.g. Git/Docker/cloud deployment More ❯
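The extract-clean-transform duties these listings describe reduce to a few lines of Python. The sketch below is a minimal illustration only, not any firm's actual pipeline; the in-memory SQLite database and the `orders` table with its columns are entirely hypothetical stand-ins for a real warehouse:

```python
import sqlite3

import pandas as pd

# Throwaway SQLite database standing in for a real data warehouse.
# The `orders` table and its columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, region TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 120.0, "UK"), (2, None, "UK"), (3, 80.0, "DE")],
)

# Extract with SQL, then clean and transform with pandas.
df = pd.read_sql_query("SELECT * FROM orders", conn)
df = df.dropna(subset=["amount"])  # drop rows with a missing amount
summary = df.groupby("region")["amount"].sum()
print(summary.to_dict())  # {'DE': 80.0, 'UK': 120.0}
```

In a Jupyter notebook the `summary` object would typically be inspected directly or handed to a plotting library rather than printed.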
JS), and TypeScript (TS). Statistical Knowledge: Solid understanding of statistical concepts and methodologies. Data Manipulation & Analysis: Proficiency with tools like Pandas, NumPy, and Jupyter Notebooks. Experience with data visualization tools such as Matplotlib, Seaborn, or Tableau. More ❯
REST APIs: Strong understanding of integrating and working with RESTful services. Data Skills: Experience in data wrangling/analysis (e.g., using SQL or Python, Jupyter Notebook). Collaboration: Experience working in an Agile environment (Scrum/Kanban). Problem-Solving: Strong analytical and troubleshooting skills. Desirable Skills Familiarity with state More ❯
the ability to deliver timely solutions to portfolio and risk managers within the firm. Mandatory Requirements 3+ years Python development experience (pandas, NumPy, Polars, Jupyter notebooks, FastAPI) Experience with AWS services, such as: S3, EC2, AWS Batch and Redshift Proficiency in relational and non-relational database technologies BA or More ❯
job responsibilities Design, develop, and evaluate innovative models for Natural Language Processing (NLP), Large Language Models (LLM), or Large Computer Vision Models. Use Python, Jupyter Notebook, and PyTorch to develop scalable machine learning solutions for business problems. Research and implement novel machine learning and statistical approaches. Mentor interns. Collaborate with More ❯
NLP, Transfer Learning, etc.), and modern Deep Learning algorithms (e.g., BERT, LSTM, etc.) Solid knowledge of SQL and Python's ecosystem for data analysis (Jupyter, Pandas, scikit-learn, Matplotlib, etc.) Understanding of model evaluation, data pre-processing techniques, such as standardisation, normalisation, and handling missing data Solid understanding of summary More ❯
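The pre-processing techniques this snippet names (standardisation, normalisation, handling missing data) reduce to short pandas expressions. A minimal sketch with made-up numbers, showing one simple strategy for each; real pipelines would choose strategies to fit the data:

```python
import numpy as np
import pandas as pd

# Hypothetical feature column with one missing value.
s = pd.Series([2.0, 4.0, np.nan, 6.0])

# Handling missing data: mean imputation (one of several strategies).
filled = s.fillna(s.mean())  # mean of [2, 4, 6] is 4.0

# Standardisation: zero mean, unit (population) standard deviation.
standardised = (filled - filled.mean()) / filled.std(ddof=0)

# Normalisation: rescale to the [0, 1] range.
normalised = (filled - filled.min()) / (filled.max() - filled.min())

print(filled.tolist())      # [2.0, 4.0, 4.0, 6.0]
print(normalised.tolist())  # [0.0, 0.5, 0.5, 1.0]
```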
Data Scientist role, or similar Commercial experience with Machine learning, Deep Learning or Advanced Analytics, and relevant tools Strong background in Python, Pandas/Jupyter and AWS or GCP Good communicator, able to work across multiple teams Desirable skills and technologies Commercial experience in an ecommerce or marketplace environment Experience More ❯
and AI/ML infrastructure. Things we're looking for: Proficiency in data analysis, insights generation and using cloud-hosted tools (e.g., BigQuery, Metabase, Jupyter). Strong Python and SQL skills, with experience in data abstractions, pipeline management and integrating machine learning solutions. Adaptability to evolving priorities and a proactive More ❯
stakeholders. What you'll need Excellent SQL skills. A drive to solve problems using data. Proficiency with the Python data science stack (pandas, NumPy, Jupyter notebooks, Plotly/matplotlib, etc.). Bonus skills include: Familiarity with Git. Experience with data visualization tools (Tableau, Looker, PowerBI, or equivalent). Knowledge of More ❯
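As a flavour of the Python data science stack that listing names, here is a minimal exploratory snippet over a fabricated dataset; in a notebook the final `print` would usually be replaced by a Plotly or matplotlib chart:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)  # seeded for reproducibility

# Hypothetical event log: 100 sessions across three channels.
df = pd.DataFrame({
    "channel": rng.choice(["email", "search", "social"], size=100),
    "duration_s": rng.normal(loc=120, scale=30, size=100).round(1),
})

# The kind of one-line summary you would eyeball in a notebook,
# then hand to Plotly/matplotlib as a bar chart.
summary = df.groupby("channel")["duration_s"].agg(["count", "mean"])
print(summary)
```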
for Natural Language Processing (NLP), Large Language Model (LLM), or Large Computer Vision Models. Use SQL to query and analyze the data. Use Python, Jupyter Notebook, and PyTorch to train/test/deploy ML models. Use machine learning and analytical techniques to create scalable solutions for business problems. Research More ❯
Proficiency with relevant ML libraries and frameworks such as PyTorch, TensorFlow, scikit-learn, HuggingFace or similar Experience with modern ML tooling, such as MLflow, Jupyter, feature stores, and vector databases. Understanding of software engineering best practices including version control, testing, CI/CD, containerisation, and observability. Familiarity with MLOps principles More ❯
of dbt for data transformation. Experience with modern cloud data stacks - Snowflake, BigQuery, Redshift, etc. Comfortable working in agile environments with tools like Git, Jupyter, Airflow. Domain & Business Experience 5+ years in applied data science roles. Proven success in delivering business value from data in one or more of the More ❯
will have insurance P&C experience. You will also be proficient in various coding languages (e.g. R, Python) and development environments (e.g. RStudio, Jupyter, VS Code). Alongside this, you will be experienced in data visualization and able to present insights to a non-technical audience. In More ❯
Bitbucket). Experience with on-premise deployments of repository managers (e.g., Artifactory, JFrog, Nexus). Experience with on-premise deployments of developer platforms (e.g., JupyterHub, Gitpod). Experience with advanced software engineering concepts and API development. Experience with build and release systems, including publication, replication, distribution, and lifecycle management of More ❯
skills Experience in support roles, incident triage, and handling (SLAs) Linux system administration basics, Bash scripting, environment variables Experience with browser-based IDEs like Jupyter Notebooks Familiarity with Agile methodologies (SAFe, Scrum, JIRA) Languages and Frameworks: JSON, YAML, Python (advanced proficiency, Pydantic a bonus), SQL, PySpark, Delta Lake, Bash, Git, Markdown More ❯