Proficiency in working with large datasets, data wrangling, and data preprocessing. Experience in data science, statistical modelling, and data analytics techniques. Experience with data analysis and visualization tools (e.g., Matplotlib, Seaborn, Tableau). Ability to work independently and lead projects from inception to deployment. Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, GCP, Azure) is desirable.
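By way of illustration of the data-wrangling and visualisation skills listed above, a minimal Pandas/Matplotlib sketch; the file name and column names are hypothetical stand-ins, not anything specified in the role:

import pandas as pd
import matplotlib.pyplot as plt

# Load a raw extract, drop incomplete rows, and derive a monthly aggregate
df = pd.read_csv("transactions.csv", parse_dates=["date"])   # hypothetical file
df = df.dropna(subset=["amount"])
monthly = df.set_index("date")["amount"].resample("M").sum()

# Quick visual check of the trend
monthly.plot(kind="bar", title="Monthly transaction volume")
plt.tight_layout()
plt.show()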
frameworks (TensorFlow, PyTorch, scikit-learn) - Skilled in handling large datasets, data wrangling, and statistical modelling - Comfortable working independently on end-to-end ML pipelines - Experienced in visualisation tools (e.g. Matplotlib, Seaborn, Tableau) Desirable: - Exposure to cloud platforms (AWS, GCP, Azure) and big data tools (Hadoop, Spark) - MSc/PhD in Computer Science, AI, Data Science, or related field.
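For the end-to-end ML pipeline skill mentioned above, a bare-bones scikit-learn sketch; the dataset and target column are invented for illustration:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("customers.csv")            # hypothetical dataset
X, y = df.drop(columns=["churned"]), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Scaling and model fitting wrapped in a single pipeline object
pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression(max_iter=1000))])
pipe.fit(X_train, y_train)
print("Held-out accuracy:", pipe.score(X_test, y_test))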
and predictive modeling. Experience with NLP techniques for text analysis, classification, and information extraction. Knowledge of deep learning frameworks such as PyTorch or TensorFlow. Experience with data visualization tools (Matplotlib, Seaborn, Plotly, or similar). Strong analytical mindset with a focus on solving real-world problems. Excellent communication skills to present findings to technical and non-technical stakeholders. Fluent in …
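To illustrate the text-classification requirement above, a simple TF-IDF baseline with scikit-learn; the example texts and labels are invented:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["refund not received", "great service, thanks", "app keeps crashing"]
labels = ["complaint", "praise", "complaint"]

# Vectorise the text and fit a simple classifier in one step
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["the website crashed again"]))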
experience using data science libraries (e.g., Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch). Strong statistical, mathematical, and data modelling skills. Experience with data visualisation tools (e.g., Power BI, Tableau, matplotlib, Plotly). Familiarity with big data tools and cloud platforms (e.g., Hadoop, Spark, Azure, AWS). Ability to work with sensitive or classified data in secure environments. Desirable: Experience in …
London, England, United Kingdom Hybrid / WFH Options
Trudenty
to validate models and algorithms. Data Manipulation & Analysis: Proficient in data manipulation and analysis using tools like Pandas, NumPy, and Jupyter Notebooks. Experience with data visualization tools such as Matplotlib, Seaborn, or Tableau to communicate insights effectively. Our offer: Cash: Depends on experience Equity: Generous equity package, on a standard vesting schedule Impact & Exposure: Work at the leading edge of …
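A short sketch of the exploratory analysis and Seaborn visualisation referred to above; the extract, column names and transformation are hypothetical:

import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("fraud_events.csv")                    # hypothetical extract
df["log_amount"] = np.log1p(df["amount"])               # tame a skewed distribution
summary = df.groupby("merchant_category")["log_amount"].mean().reset_index()

sns.barplot(data=summary, x="merchant_category", y="log_amount")
plt.xticks(rotation=45)
plt.tight_layout()
plt.show()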
scikit-learn. Skilled in data wrangling, preprocessing, and managing large-scale datasets. Solid understanding of statistical modelling, predictive analytics, and time series analysis. Experience with data visualisation tools like Matplotlib, Seaborn, or Tableau. Salary: £525 per day.
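As an example of the time-series analysis mentioned above, a minimal decomposition sketch with statsmodels; the daily series and its column names are assumptions for illustration:

import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical daily demand series indexed by date
sales = pd.read_csv("daily_sales.csv", parse_dates=["date"], index_col="date")["units"]

# Split the series into trend, weekly seasonality and residual components
result = seasonal_decompose(sales, model="additive", period=7)
result.plot()
plt.show()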
Clustering, dimensionality reduction, LLMs. Proficiency in Python for developing machine learning models and conducting statistical analyses. Strong understanding of data visualization tools and techniques (e.g., Python libraries such as Matplotlib, Seaborn, Plotly) and the ability to present data effectively. Specific technical requirements: Data Science or AI/ML strategy; Data Science or AI/ML solution architecture; proficiency in …
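For the clustering and dimensionality-reduction skills listed above, a compact scikit-learn sketch on synthetic data standing in for real features:

from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic high-dimensional data as a placeholder for a real feature matrix
X, _ = make_blobs(n_samples=500, n_features=20, centers=4, random_state=0)

# Reduce to two components for inspection, then cluster
X_2d = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_2d)
print(labels[:10])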
DocumentDB, MongoDB. Demonstrated expertise in unit testing and tools – JUnit, Mockito, PyTest, Selenium. Strong working knowledge of the PyData stack – pandas, NumPy for data manipulation; Jupyter Notebooks for experimentation; matplotlib/Seaborn for basic visualisation. Experience with data analysis and troubleshooting data-related issues. Knowledge of design patterns and software architectures. Familiarity with CI/CD and automation tools. Experience …
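A minimal example of the PyTest-style unit testing mentioned above, written against a hypothetical data-cleaning helper rather than any function named in the role:

import pandas as pd

def drop_negative_amounts(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical cleaning helper: remove rows with negative amounts."""
    return df[df["amount"] >= 0].reset_index(drop=True)

def test_drop_negative_amounts():
    raw = pd.DataFrame({"amount": [10.0, -5.0, 3.0]})
    cleaned = drop_negative_amounts(raw)
    assert len(cleaned) == 2
    assert (cleaned["amount"] >= 0).all()

Saved as test_cleaning.py, this runs with a plain "pytest" invocation from the command line.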
Python programming skills, including experience with relevant analytical and machine learning libraries (e.g., pandas, polars, numpy, sklearn, TensorFlow/Keras, PyTorch, etc.), in addition to visualisation and API libraries (matplotlib, plotly, streamlit, Flask, etc.). Understanding of Gen AI models, vector databases, and agents, and awareness of market trends; hands-on experience with these is desirable. Substantial experience …
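To illustrate the API side noted above, a minimal Flask endpoint wrapping a hypothetical pre-trained model; the model file, route name and payload shape are assumptions:

from flask import Flask, request, jsonify
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")          # hypothetical fitted estimator

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]    # e.g. a list of numeric values
    prediction = model.predict([features])[0]
    return jsonify({"prediction": float(prediction)})

if __name__ == "__main__":
    app.run(port=5000)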
production-grade code in Python. Solid understanding and proficiency in working with Pandas, NumPy, Scikit-Learn, TensorFlow, PyTorch, and SQL. Proficiency in advanced data visualization tools and libraries (e.g., Matplotlib, Seaborn, Plotly, Tableau) for creating insightful and interactive visualizations. Solid understanding of data structures, data modelling, and software architecture. Experience with cloud services (such as AWS) and tools for machine learning …
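For the interactive visualisation requirement above, a brief Plotly Express sketch; the results table and its columns are hypothetical:

import pandas as pd
import plotly.express as px

df = pd.read_csv("model_scores.csv")     # hypothetical results table
fig = px.scatter(
    df, x="predicted", y="actual", color="segment",
    hover_data=["customer_id"], title="Predicted vs actual"
)
fig.show()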
and Scikit-learn. • Experience with machine learning frameworks like TensorFlow or PyTorch. • Strong SQL skills and experience with relational databases. • Proficiency in data visualization tools such as Power BI, Matplotlib, Seaborn, or Plotly. • Experience with cloud platforms, especially AWS (e.g., S3, EC2, SageMaker). • Familiarity with tools like Jupyter, Snowflake, Docker, and Atlassian suite (Bitbucket, JIRA, Confluence). Soft Skills …
Computer Science, Mathematics, Statistics, Business Administration or related field. Advanced knowledge of SQL (joins, aggregations, CTEs and window functions). Good knowledge of Python, including popular Data Science packages (pandas, matplotlib, seaborn, numpy, sklearn). Familiarity with what is happening under the hood of popular Machine Learning algorithms. Strong problem-solving skills and attention to detail. Strong communication and collaboration skills. Ability …
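A small, self-contained example of the CTE and window-function style of SQL mentioned above, run here against an in-memory SQLite database purely for illustration (window functions need a reasonably recent SQLite build; the table and columns are invented):

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (customer_id INT, order_date TEXT, amount REAL);
INSERT INTO orders VALUES (1, '2024-01-05', 20.0), (1, '2024-02-01', 35.0), (2, '2024-01-10', 50.0);
""")

# CTE plus a window function: pick each customer's most recent order
query = """
WITH ranked AS (
    SELECT customer_id, order_date, amount,
           ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) AS rn
    FROM orders
)
SELECT customer_id, order_date, amount FROM ranked WHERE rn = 1;
"""
for row in con.execute(query):
    print(row)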
London, England, United Kingdom Hybrid / WFH Options
Made Tech Limited
probability, Bayesian stats, etc.) Good working knowledge of Python. Implementing end-to-end ML pipelines. Hands-on experience in the Python data science ecosystem (e.g. numpy, scipy, pandas, scikit-learn, matplotlib). Deep Learning frameworks (TensorFlow, PyTorch, MLX). Popular classification and regression techniques. Unsupervised learning & matrix factorisation algorithms. Natural Language Processing (NLP) and document processing. Generative AI (open and closed source …
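As a sketch of the deep-learning framework experience listed above, a minimal PyTorch training step on random data; the layer sizes and synthetic batch are arbitrary choices for illustration:

import torch
import torch.nn as nn

# Toy regression model and synthetic batch
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.randn(64, 10)
y = torch.randn(64, 1)

# One optimisation step
optimiser.zero_grad()
loss = loss_fn(model(X), y)
loss.backward()
optimiser.step()
print("loss:", loss.item())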
City of London, London, United Kingdom Hybrid / WFH Options
Intellect Group
s degree from a Russell Group university in Data Science, Computer Science, Mathematics, Physics, Engineering, or a related discipline. Strong programming skills in Python (e.g. NumPy, pandas, scikit-learn, matplotlib); R is also welcome. A solid understanding of core machine learning concepts, data wrangling, and model evaluation. Proficiency with SQL and experience handling large datasets. A passion for solving complex problems …
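To illustrate the model-evaluation skill above, a brief cross-validation sketch using a built-in scikit-learn dataset as a stand-in for real data:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# 5-fold cross-validated accuracy as a simple evaluation baseline
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"Mean accuracy: {scores.mean():.3f} (+/- {scores.std():.3f})")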
Out in Science, Technology, Engineering, and Mathematics
knowledge of data science fundamentals (Machine Learning methods, Statistics). Fluent in common analytics tools (Python, Pandas, NumPy, Scikit-Learn, SQL, etc.). Comfortable using data visualization libraries (e.g. Seaborn, Matplotlib). Demonstrated initiative, judgment and discretion while handling sensitive information. Preferred Qualifications: If you have the following characteristics, it would be a plus: PhD in a STEM subject and experience in …
learn, XGBoost, LightGBM, StatsModels. PyCaret, Prophet, or custom implementations for time series. A/B testing frameworks (e.g., DoWhy, causalml). Programming & Data Tools: Python: strong foundation in Pandas, NumPy, matplotlib/seaborn, scikit-learn, TensorFlow, PyTorch, etc. SQL: advanced querying for large-scale datasets. Jupyter, Databricks, or notebook-based workflows for experimentation. Data Access & Engineering Collaboration: comfort working with cloud …
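A minimal sketch of the Prophet-style forecasting mentioned above; the input file is hypothetical, and Prophet expects a frame with columns named ds (date) and y (value):

import pandas as pd
from prophet import Prophet

# Hypothetical daily history with Prophet's expected column names
history = pd.read_csv("daily_metric.csv")        # columns: ds, y

model = Prophet()
model.fit(history)

future = model.make_future_dataframe(periods=30)     # extend 30 days ahead
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())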
stakeholders. Tools/Frameworks: PyCaret, Prophet, or custom implementations for time series; A/B testing frameworks (e.g., DoWhy, causalml). Programming & Data Tools: Python: strong foundation in Pandas, NumPy, matplotlib/seaborn, scikit-learn, TensorFlow, PyTorch, etc. SQL: advanced querying for large-scale datasets. Jupyter, Databricks, or notebook-based workflows for experimentation. Data Access & Engineering Collaboration: comfort working with cloud …
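And for the A/B-testing side, a bare-bones significance check using statsmodels as a simple stand-in for the richer causal tooling (DoWhy, causalml) named above; the conversion counts are invented:

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical conversions out of visitors for control and variant
conversions = [120, 150]
visitors = [2400, 2380]

stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")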