such as recommendation engines or automated lead scoring systems. They should also be able to perform statistical analysis. Requirements: Python for data science (the usual pandas, plotting, etc.); modelling skills for both ML applications and data reporting; SQL, at least the basics, but by year 3 should be quite proficient with at …
using Python. Practical expertise and work experience with ML projects, both supervised and unsupervised. Proficient programming skills with Python, including libraries such as NumPy, pandas, and scikit-learn, as well as R. Understanding and usage of the OpenAI API. NLP: tokenization, embeddings, sentiment analysis, basic transformers for text-heavy datasets.
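The NLP basics named above (tokenization, embeddings, sentiment analysis) can be sketched with a deliberately simplified, dependency-free toy. The token rules, vocabulary, and sentiment lexicon below are hypothetical illustrations, not any listing's actual stack; a real pipeline would use learned tokenizers and embeddings (e.g. from Hugging Face) rather than these hand-rolled stand-ins.

```python
# Toy sketch of three basic NLP steps: tokenization, a bag-of-words
# "embedding", and lexicon-based sentiment. All choices here are
# deliberately naive, illustrative stand-ins for real NLP tooling.
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase, punctuation-stripping whitespace tokenization."""
    return [t.strip(".,!?").lower() for t in text.split() if t.strip(".,!?")]

def bow_vector(tokens: list[str], vocab: list[str]) -> list[int]:
    """Represent a document as token counts over a fixed vocabulary."""
    counts = Counter(tokens)
    return [counts[w] for w in vocab]

# Hypothetical sentiment lexicon for the sketch.
POSITIVE = {"great", "good", "love"}
NEGATIVE = {"bad", "poor", "hate"}

def sentiment_score(tokens: list[str]) -> int:
    """Positive minus negative lexicon hits; > 0 suggests positive sentiment."""
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

doc = "I love this product, the support is great!"
toks = tokenize(doc)
print(bow_vector(toks, ["love", "great", "bad", "support"]))  # [1, 1, 0, 1]
print(sentiment_score(toks))                                   # 2
```

Learned embeddings replace the count vector with dense floats, but the document-to-vector framing is the same.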
skills and a good understanding of software engineering principles and clean code practices. Expert-level knowledge of Python for machine learning and data manipulation (pandas, NumPy). Advanced experience with SQL for data querying and manipulation. Experience with Git, Bash, Docker, and machine learning pipelines. Experience with open-source machine learning …
Python for API and model development, including frameworks like scikit-learn, PyTorch, and TensorFlow. Understanding of machine learning techniques. Experience with data manipulation libraries (e.g., pandas, Spark, SQL). Experience with version control (Git). Cloud experience (Azure, GCP, AWS). Additional desirable skills include: modeling experience in industry-relevant use cases …
or similar. Technical knowledge of relevant ML performance metrics and how to apply them to monitor models. Strong knowledge of Python (such as numpy, pandas, matplotlib, streamlit, and opencv). Strong knowledge of modern programming paradigms (OOP, functional programming, etc.). Ability to write clean, robust, readable code with error handling, and …
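As a hedged illustration of the "ML performance metrics" requirement, here is a minimal hand-rolled computation of precision, recall, and F1 on hypothetical labels. In practice `sklearn.metrics` supplies these functions; the point of the sketch is the arithmetic a model-monitoring dashboard actually tracks.

```python
# Classification monitoring metrics computed by hand (illustrative only;
# scikit-learn's sklearn.metrics is the practical choice).
def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of flagged, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of positives, how many we caught
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical labels from a deployed classifier.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(precision_recall_f1(y_true, y_pred))  # (0.75, 0.75, 0.75)
```

Monitoring typically recomputes these on a rolling window of recent predictions and alerts when they drift from the values seen at validation time.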
CD, version control (git), testing frameworks, MLOps. Comfortable working with Docker and containerised applications. Experience with data science Python libraries such as scikit-learn, pandas, NumPy, PyTorch, etc. Experience using AWS or a similar cloud computing platform. Great communicator: convey complex ideas and solutions in clear, precise and accessible ways. Team …
solving and solution scoping. Strong grasp of mathematical and statistical concepts and machine learning algorithms. Proficiency in Python and data science libraries, for example NumPy, pandas, scikit-learn, Keras. SQL proficiency. Experience with cloud environments, for example Google Cloud Platform. Version control management. Ability to work efficiently without compromising quality. Effective …
and guide this work through others. Experienced in using Python and SQL to query and analyse large datasets, with expertise in libraries such as Pandas, NumPy, SciPy, Matplotlib, and Seaborn for data manipulation, statistical analysis, and visualisation. Familiarity with Monte Carlo simulations in Python and/or PyMC3 for Bayesian …
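The Monte Carlo simulation skill mentioned above can be shown in its simplest form with only the standard library (PyMC3 builds full Bayesian inference on top of the same sampling idea, but is not assumed here). This sketch estimates π by sampling uniform points in the unit square and counting those inside the quarter circle; the sample count and seed are arbitrary choices for the illustration.

```python
# Minimal Monte Carlo estimate of pi: the fraction of uniform points in the
# unit square that fall inside the quarter circle approaches pi/4.
import random

def estimate_pi(n_samples: int, seed: int = 42) -> float:
    rng = random.Random(seed)  # seeded for reproducibility
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # close to 3.14159
```

The same sample-and-aggregate pattern underlies posterior sampling in probabilistic programming libraries; only the distribution being sampled changes.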
convolution, attention). Proficiency in a programming language (Python preferred). Familiar with data processing and analysis tools such as Pandas, NumPy, Scikit-learn, etc. Knowledge of machine learning concepts such as supervised and unsupervised learning, classification, regression, clustering, dimensionality reduction, etc. Experience with applying machine learning …
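Of the concepts this listing enumerates, unsupervised learning is the easiest to show compactly. Below is a toy one-dimensional k-means with k = 2, written in plain Python purely to expose the assign/update loop; the data, the extremes-based initialisation, and the fixed iteration count are all illustrative simplifications of what `sklearn.cluster.KMeans` does properly.

```python
# Toy unsupervised learning: 1-D k-means with two clusters, showing the
# classic assign-points / update-centroids loop (illustrative only).
def kmeans_1d(points, iters=20):
    # Naive initialisation: put the two centroids at the data extremes,
    # which guarantees both clusters start non-empty for this sketch.
    c0, c1 = min(points), max(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        a = [p for p in points if abs(p - c0) <= abs(p - c1)]
        b = [p for p in points if abs(p - c0) > abs(p - c1)]
        # Update step: each centroid moves to the mean of its cluster.
        c0 = sum(a) / len(a)
        c1 = sum(b) / len(b)
    return (c0, c1)

data = [1.0, 1.2, 0.8, 8.0, 8.2, 7.9]
print(kmeans_1d(data))  # centroids near 1.0 and 8.0
```

Supervised learning differs only in that labels, not distances to centroids, drive the update; regression and classification follow the same fit/predict shape in scikit-learn.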
growth, marketing, or product-focused roles. Deep understanding of LTV modeling, forecasting, and experimental design. Proficiency in Python for data analysis and modeling (e.g., pandas, scikit-learn, statsmodels). Advanced SQL skills and experience working with large datasets in modern data environments. Experience working with cross-functional growth or marketing …
West Bend, Wisconsin, United States Hybrid / WFH Options
Delta Defense
PyTorch, TensorFlow, scikit-learn, and deep understanding of model lifecycle management. Strong command of the tools for building machine learning models. Proficient in Python (Pandas, NumPy, scikit-learn, PyTorch) and advanced SQL. 2 to 3 years of experience with big data platforms: Snowflake, Databricks or Spark. Solid understanding of probability …
actionable insights. Capability to manage projects end-to-end and produce good outcomes without much supervision. Proficiency with data manipulation and modelling tools - e.g., pandas, statsmodels, R. Experience with scientific computing and tooling - e.g., NumPy, SciPy, R, Matlab, Mathematica, BLAS. Degree in Statistics, Mathematics, Physics or equivalent. Bonus: Experience implementing …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Bupa
Adobe. Substantial experience with data warehouse platforms like Snowflake. Substantial experience with languages like SQL and Python. Substantial experience with Python libraries such as NumPy, pandas, SciPy, scikit-learn. Applied knowledge of machine learning/statistical modelling techniques. Experience in using a tag management system like TealiumIQ. Knowledge of software development …
environments and HPC scheduling software. Software development including version control using Git, with open-source tools and software. Python and data analysis modules such as Pandas, NumPy, and Dask. Developing software in C/C++, Fortran or other programming languages. DESIRED QUALIFICATIONS: In-depth understanding of HPC architectures and their optimization …
skills in probability and statistics. Experience developing code collaboratively and implementing solutions in a production environment. Proficiency with data manipulation and modelling tools - e.g., pandas, statsmodels, R. Experience with scientific computing and tooling - e.g., NumPy, SciPy, Matlab, etc. Self-driven with the capability to efficiently manage projects end-to-end.
San Diego, California, United States Hybrid / WFH Options
G2 Ops, Inc
Building Extract, Transform, and Load (ETL) pipelines, web application servers, or search indexes. Experience using programming languages and products such as Python, Jupyter Notebook, pandas, NumPy, Requests, or Antigravity. Experience applying complex mathematical and statistical concepts. Experience applying statistical and operations research methods and tools. Experience employing spreadsheets for data …
execute trades in commodities markets (e.g., oil, gas). The PM is keen to get someone from a data engineering background. Tech: Python, SQL, Pandas, AWS, ETL, Airflow. Please apply if this fits.
Top Skills: - Python - Machine Learning - JavaScript - NumPy and Pandas is a plus - This position requires an active DoD Clearance (Secret, Top Secret, Top Secret/SCI) or the ability to obtain one (Interim Secret, Interim Top Secret) - Because an active or interim DoD clearance is required, U.S. Citizenship is required.
and requirements: Demonstrated experience as a Data Engineer scaling data-heavy platforms. Strong understanding of ETL pipelines and data architectures. Python, SQL and Python libraries (Pandas and Spark). Modern data warehousing tools. AWS/GCP experience. Experience working in high-growth start-up or scale-up environments (essential). Hybrid working in Central …