to apply statistical techniques to validate models and algorithms. Data Manipulation & Analysis: Proficient in data manipulation and analysis using tools like Pandas, NumPy, and Jupyter Notebooks. Experience with data visualization tools such as Matplotlib, Seaborn, or Tableau to communicate insights effectively. Our offer: Cash: depends on experience. Equity: generous equity …
in machine learning applications such as recommendation systems, segmentation, and marketing optimisation. Proficiency in Python, SQL, Bash, and Git, with hands-on experience in Jupyter notebooks, Pandas, and PyTorch. Familiarity with cloud platforms (AWS, Databricks, Snowflake) and containerisation tools (Docker, Kubernetes). Strong problem-solving skills and a passion for …
Google BigQuery/Bigtable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks, Graph Computing, Microservices Architectures. A numerate university degree (e.g. Computer Science or Geography). Relevant industry sector knowledge ideal but not essential. A strong communicator …
engineering skills, including version control (Git), code reviews, and unit testing. Familiarity with common data science libraries and tools (e.g., NumPy, Pandas, scikit-learn, Jupyter). Experience in setting up and managing continuous integration and continuous deployment pipelines. Proficiency with containerization technologies (e.g., Docker, Kubernetes). Experience with cloud services …
learning applications such as recommendations, segmentation, forecasting, and marketing spend optimisation. Proficiency in Python, SQL, and Git, with hands-on experience in tools like Jupyter notebooks, Pandas, and PyTorch. Expertise in cloud platforms (AWS, Databricks, Snowflake) and containerisation tools (Docker, Kubernetes). Strong leadership skills with experience mentoring and managing …
series analysis, econometrics, and machine learning techniques. Strong Python and SQL skills with experience in numerical computing and statistical analysis. Experience using notebooks (e.g. Jupyter), Tableau, or other data science tools for analysis and visualization. A self-starter, with a track record of working independently and achieving targets in a …
the ability to deliver timely solutions to portfolio and risk managers within the firm. Mandatory Requirements: 3+ years Python development experience (Pandas, NumPy, Polars, Jupyter notebooks, FastAPI). Experience with AWS services such as S3, EC2, AWS Batch, and Redshift. Proficiency in relational and non-relational database technologies. BA or …
of generative LLMs • Fundamental knowledge of ML, and basic knowledge of AI, NLP, and Large Language Models (LLMs) • Comfortable working with Python and Jupyter Notebooks • In-depth knowledge of and familiarity with cloud platforms like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Technical Skills …
NLP, Transfer Learning, etc.) and modern Deep Learning algorithms (e.g., BERT, LSTM, etc.). Solid knowledge of SQL and Python's ecosystem for data analysis (Jupyter, Pandas, scikit-learn, Matplotlib, etc.). Understanding of model evaluation and data pre-processing techniques such as standardisation, normalisation, and handling missing data. Solid understanding of summary …
Synapse Analytics, and large-scale data warehouses (Snowflake, Redshift, Presto). Proficiency in data visualization tools (Databricks, Power BI) and the Python data science ecosystem (Jupyter, Pandas, NumPy, Matplotlib). Plusses: Financial services background. Degree in cybersecurity. Any advanced Databricks qualifications. Have led teams of more than 10 people. Recent …
for Natural Language Processing (NLP), Large Language Model (LLM), or large Computer Vision projects. -Use SQL to query and analyze the data. -Use Python, Jupyter notebooks, and PyTorch to train/test/deploy ML models. -Use machine learning and analytical techniques to create scalable solutions for business problems. -Research …
to work on a hybrid basis at their offices in London. Essential Skills: Python, pytest. Common Python libraries such as Pandas/NumPy/Jupyter notebooks. OpenTelemetry. Git/GitHub. GitHub Actions. Docker. Microservices and/or lambdas. PostgreSQL. Streamlit. Desirable skills include: the FastAPI ecosystem (Pydantic, SQLAlchemy, Alembic) …
Proficiency with relevant ML libraries and frameworks such as PyTorch, TensorFlow, scikit-learn, Hugging Face, or similar. Experience with modern ML tooling, such as MLflow, Jupyter, feature stores, and vector databases. Understanding of software engineering best practices including version control, testing, CI/CD, containerisation, and observability. Familiarity with MLOps principles …
will have insurance P&C experience. You will also be proficient in various coding languages (e.g. R, Python) and development environments (e.g. RStudio, Jupyter, VS Code). Alongside this, you will be experienced in data visualization and in presenting insights to a non-technical audience. In …
Bitbucket). Experience with on-premise deployments of repository managers (e.g., Artifactory, JFrog, Nexus). Experience with on-premise deployments of developer platforms (e.g., JupyterHub, Gitpod). Experience with advanced software engineering concepts and API development. Experience with build and release systems, including publication, replication, distribution, and lifecycle management of …
proficiency in deploying cloud-based machine learning pipelines, particularly on Google Cloud Platform (GCP) and Vertex AI. Extensive experience in Python programming and using Jupyter notebooks. Deep expertise in deep learning frameworks such as Keras, PyTorch, and Core ML. Strong knowledge of image processing techniques and algorithms. Experience in data wrangling …
research setting, with experience identifying biomarkers or therapeutic targets. Familiarity with best practices in version control, reproducible research, and collaborative development environments (e.g., Git, Jupyter notebooks, code review). Northreach is an equal opportunity employer and we do not discriminate against any employee or applicant for employment based on race …