areas: Big Data Analytics (e.g. Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks, Graph Computing, Microservices Architectures. Modelling & Statistical Analysis experience, ideally customer related. A numbers-based university degree, e.g. Computer Science or Geography. Relevant industry sector knowledge ideal but not essential. A More ❯
London, England, United Kingdom Hybrid / WFH Options
Sojern
and product managers. You can evaluate, analyse and interpret model results, driving further improvement of existing statistical model performance. You can perform complex data analysis using SQL/Jupyter notebooks to find underlying issues and propose a solution to stakeholders, explaining the various trade-offs associated with the solution. You can use your grit and initiative to fill in More ❯
a related field. Proven experience in machine learning applications such as recommendation systems, segmentation, and marketing optimisation. Proficiency in Python, SQL, Bash, and Git, with hands-on experience in Jupyter notebooks, Pandas, and PyTorch. Familiarity with cloud platforms (AWS, Databricks, Snowflake) and containerisation tools (Docker, Kubernetes). Strong problem-solving skills and a passion for driving measurable business impact. Knowledge More ❯
in building machine learning models for tasks like recommendations, segmentation, forecasting, and optimising marketing spend. Proficiency in Python, SQL, Bash, and Git, with hands-on experience in tools like Jupyter notebooks, Pandas, PyTorch, and more. Experience with A/B testing and other experimentation methods to validate model performance and business impact. Experience with cloud platforms (AWS, Databricks, Snowflake), containerisation More ❯
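The A/B-testing requirement above can be illustrated with a minimal sketch: a two-proportion z-test comparing conversion rates between a control and a variant. The function name and traffic numbers are hypothetical, and real experiments would also consider power, sample-size planning and multiple-testing corrections; this only shows the core significance calculation using the standard library.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_a/conv_b: number of conversions; n_a/n_b: number of users exposed.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both arms convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 2.0% vs 2.6% conversion over 10k users per arm.
z, p = two_proportion_ztest(200, 10_000, 260, 10_000)
print(f"z={z:.2f}, p={p:.4f}")
```

With these made-up numbers the uplift is significant at the conventional 5% level; in practice you would pre-register the metric and stopping rule before reading the p-value.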
London, England, United Kingdom Hybrid / WFH Options
Trudenty
and interpreting large datasets. Ability to apply statistical techniques to validate models and algorithms. Data Manipulation & Analysis: Proficient in data manipulation and analysis using tools like Pandas, NumPy, and Jupyter Notebooks. Experience with data visualization tools such as Matplotlib, Seaborn, or Tableau to communicate insights effectively. Our offer: Cash: Depends on experience Equity: Generous equity package, on a standard vesting More ❯
learning models Build AI systems using Large Language Models Build processes for extracting, cleaning and transforming data (SQL/Python) Ad-hoc data mining for insights using Python + Jupyter notebooks Present insights and predictions in live dashboards using Tableau/PowerBI Lead the presentation of findings to clients through written documentation, calls, and presentations Actively seek out new opportunities More ❯
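The extracting/cleaning/transforming requirement above can be sketched with a small pure-Python cleaning step. The raw CSV, column names and format list are all hypothetical; the point is the typical shape of such a transform: normalise dates, trim and lowercase keys, and coerce numeric fields while tolerating blanks.

```python
import csv
import io
from datetime import datetime

# Hypothetical raw export: mixed date formats, blank amounts, stray whitespace.
RAW = """date,customer,amount
2024-01-05, acme ,120.50
05/01/2024,Beta Ltd,
2024-01-06,acme,80.00
"""

def clean(rows):
    for row in rows:
        # Normalise dates to ISO 8601, trying the formats we expect to see.
        for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
            try:
                row["date"] = datetime.strptime(row["date"].strip(), fmt).date().isoformat()
                break
            except ValueError:
                continue
        # Trim and lowercase the join key; blank amounts become None.
        row["customer"] = row["customer"].strip().lower()
        row["amount"] = float(row["amount"]) if row["amount"].strip() else None
        yield row

records = list(clean(csv.DictReader(io.StringIO(RAW))))
```

A production pipeline would push the same logic into SQL or a framework job, but the cleaning decisions (canonical date format, key normalisation, null handling) are the same.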
improve our ability to serve clients. Tech Skills Required: Advanced level of coding in Python for Data Science Software engineering architecture design for applications with integrated Data Science solutions Jupyter server/notebooks AWS: EC2, SageMaker, S3 Git version control SQL skills including selecting, filtering, aggregating, and joining data using core clauses, plus CTEs, window functions, subqueries, and data More ❯
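The SQL skills listed above (core clauses plus CTEs and window functions) can be demonstrated end-to-end with the standard library's sqlite3 module, which supports window functions on any recent Python build (SQLite ≥ 3.25). The table and data here are illustrative only: a CTE aggregates orders per day, then a window function computes a per-customer running total.

```python
import sqlite3

# In-memory database with a toy orders table (illustrative data only).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
INSERT INTO orders VALUES
  ('acme', '2024-01-05', 120.5),
  ('acme', '2024-01-06', 80.0),
  ('beta', '2024-01-05', 200.0);
""")

# A CTE feeding a window function: per-customer running total of spend.
query = """
WITH daily AS (
    SELECT customer, order_date, SUM(amount) AS day_total
    FROM orders
    GROUP BY customer, order_date
)
SELECT customer,
       order_date,
       SUM(day_total) OVER (
           PARTITION BY customer ORDER BY order_date
       ) AS running_total
FROM daily
ORDER BY customer, order_date;
"""
rows = con.execute(query).fetchall()
for row in rows:
    print(row)
```

The same query shape ports directly to warehouse engines such as Snowflake or Redshift; only connection handling changes.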
London, England, United Kingdom Hybrid / WFH Options
Compare the Market
design and deployment. Strong software engineering skills, including version control (Git), code reviews, and unit testing. Familiarity with common data science libraries and tools (e.g., NumPy, Pandas, Scikit-learn, Jupyter). Experience in setting up and managing continuous integration and continuous deployment pipelines. Proficiency with containerization technologies (e.g., Docker, Kubernetes). Experience with cloud services (e.g., AWS, GCP, Azure) for More ❯
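The unit-testing skill mentioned above can be sketched with the standard library's unittest module. The `normalise` helper is hypothetical; the example just shows the idiom of testing both the typical path and a degenerate edge case (constant input) that would otherwise divide by zero.

```python
import unittest

def normalise(values):
    """Scale a list of numbers to the [0, 1] range (hypothetical helper)."""
    lo, hi = min(values), max(values)
    if lo == hi:
        # Degenerate case: all values identical, avoid division by zero.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

class TestNormalise(unittest.TestCase):
    def test_range(self):
        out = normalise([2, 4, 6])
        self.assertEqual(out[0], 0.0)
        self.assertEqual(out[-1], 1.0)

    def test_constant_input(self):
        self.assertEqual(normalise([5, 5]), [0.0, 0.0])
```

Run with `python -m unittest <file>`; in a CI/CD pipeline the same command gates merges alongside code review.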
testing frameworks (e.g., DoWhy, causalml) Programming & Data Tools : Python: Strong foundation in Pandas, NumPy, matplotlib/seaborn, scikit-learn, TensorFlow, PyTorch etc. SQL: Advanced querying for large-scale datasets. Jupyter, Databricks, or notebook-based workflows for experimentation. Data Access & Engineering Collaboration : Comfort working with cloud data warehouses (e.g., Snowflake, Databricks, Redshift, BigQuery) Familiarity with data pipelines and orchestration tools like More ❯
London, England, United Kingdom Hybrid / WFH Options
Compare the Market
storage and retrieval. Strong software engineering skills, including version control (Git), code reviews, and unit testing. Familiarity with common data science libraries and tools (e.g., NumPy, Pandas, Scikit-learn, Jupyter). Experience in setting up and managing continuous integration and continuous deployment pipelines. Proficiency with containerization technologies (e.g., Docker, Kubernetes). Experience with cloud services (e.g., AWS, GCP, Azure) for More ❯
/ActiveMQ/RabbitMQ or NiFi. ServiceNow ecosystem. AWS/GCP, Terraform and similar automated provisioning tool chains. OCI container and execution frameworks (Docker, Podman). K8s (OpenShift, Rancher etc.). JupyterLab for analytics, including standard libraries (NumPy/pandas/matplotlib). Knowledge of intermediate (No)SQL skills. Trading system application configuration/deployment/management experience. Systems Familiarity with More ❯
field. Proven experience in machine learning applications such as recommendations, segmentation, forecasting, and marketing spend optimisation. Proficiency in Python, SQL, and Git, with hands-on experience in tools like Jupyter notebooks, Pandas, and PyTorch. Expertise in cloud platforms (AWS, Databricks, Snowflake) and containerisation tools (Docker, Kubernetes). Strong leadership skills with experience mentoring and managing data science teams. Deep knowledge More ❯
a related field. 🧠 Solid understanding of data analysis, machine learning concepts, and statistical methods. 🐍 Proficiency in Python (e.g., Pandas, Scikit-learn, NumPy) or R, with exposure to tools like Jupyter, SQL, or cloud platforms (e.g., AWS, GCP). 📊 Experience working with data—through academic projects, internships, or personal work—and a curiosity to learn more. 🗣️ Strong communication skills to share More ❯
learning models Build AI systems using Large Language Models Build processes for extracting, cleaning and transforming data (SQL/Python) Ad-hoc data mining for insights using Python + Jupyter notebooks Actively seek out new opportunities to learn and develop Be an example of data science best-practice e.g. Git/Docker/cloud deployment Write proposals for exciting new More ❯
databases. • Proficiency in data visualization tools such as Power BI, Matplotlib, Seaborn, or Plotly. • Experience with cloud platforms, especially AWS (e.g., S3, EC2, SageMaker). • Familiarity with tools like Jupyter, Snowflake, Docker, and Atlassian suite (Bitbucket, JIRA, Confluence). Soft Skills • Strong analytical and problem-solving abilities. • Excellent communication and teamwork skills. • Natural curiosity and a passion for uncovering insights More ❯
with cloud-based ML services: AWS SageMaker, Azure ML, GCP Vertex AI, etc. Understanding of deployment pipelines, serverless components (e.g., Lambda, Step Functions). Data Science Collaboration: Exposure to Jupyter Notebooks, visualization libraries (e.g., Matplotlib, Seaborn). Knowledge of synthetic data generation, data augmentation, and perturbation techniques. Preferred Qualifications: Bachelor’s or Master’s in Computer Science, Data Science, Software More ❯