related field.
- Deep understanding of option pricing theory (Black-Scholes, local/stochastic volatility, Monte Carlo).
- Expert Python developer with strong numerical and vectorized coding skills (NumPy, SciPy, Pandas).
- Experience building and calibrating volatility surfaces and handling risk measures (Greeks, VaR, sensitivities).
- Strong background in stochastic calculus, numerical methods, and optimization.
London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
continuous improvement initiatives.
Essential Skills for the AWS Data Engineer:
- Extensive hands-on experience with AWS data services
- Strong programming skills in Python (including libraries such as PySpark or Pandas)
- Solid understanding of data modelling, warehousing and architecture design within cloud environments
- Experience building and managing ETL/ELT workflows and data pipelines at scale
- Proficiency with SQL and working …
e.g., Mathematics, Statistics, Computer Science, Engineering, Physics, Economics); advanced degree preferred.
- Strong expertise in statistical modelling, Bayesian inference, and machine learning.
- Proficient in Python (using libraries such as NumPy, pandas, scikit-learn, PyMC or Stan).
- Experienced in SQL; able to write efficient and robust queries.
- Demonstrated experience working with large and complex datasets.
- Ability to communicate complex analytical concepts clearly …
in one or more modern programming languages such as Python, C++, or Java, with an understanding of algorithms and data structures.
- Strong expertise in the Python data science stack (NumPy, Pandas) and ML/DL frameworks (scikit-learn, PyTorch, TensorFlow) for end-to-end model development.
- 3+ years building production-ready ML infrastructure, including data pipelines, training/inference workflows, and …
GitHub Actions, CircleCI - depending on team)
- Monitoring/Logging (CloudWatch, Prometheus, Grafana - nice-to-have)
Data/Scientific Computing/ML Tools:
- Python libraries for data processing, visualization, and ML (Pandas, NumPy, Matplotlib, scikit-learn, PyTorch, TensorFlow)
- Robotics frameworks or internal research computing tools (if applicable)
Strong Communication Skills - able to clearly articulate technical concepts to peers, leadership, and cross-functional …
and causal inference models, preferably in pricing, marketplace, or supply chain contexts.
- Experience with experimental design and statistical inference in real-world business settings.
Technical Skills:
- Proficiency in Python (pandas, NumPy, SciPy, scikit-learn; TensorFlow/PyTorch preferred).
- Strong SQL skills and experience querying large-scale data platforms (e.g., Snowflake, Redshift).
- Familiarity with scientific software principles (version control …
San Francisco, California, United States Hybrid/Remote Options
Private Block
probability distributions as well as conditional probabilities.
- Passion for understanding human behavior and improving operational processes that impact real customers.
Technologies We Use and Teach:
- SQL, Snowflake, etc.
- Python (Pandas, NumPy)
- Tableau, Airflow, Looker, Mode, Prefect, dbt
with substantial hands-on experience using the FastAPI framework.
- Experience with Pydantic for data validation and schema definition in Python applications.
- Deep expertise in data manipulation and analysis using Pandas/Polars and similar.
- Experience in .NET development.
- Skilled in asynchronous and parallel programming, with practical knowledge of asyncio.
- Proficiency working with both structured and semi-structured data, including MongoDB …
and innate curiosity to learn new things.
Preferred qualifications:
- Background in traditional finance or digital assets, ideally in the trading domain
- Hands-on experience with Python libraries and frameworks (NumPy, Pandas, Airflow, FastAPI, Flask, SQLAlchemy)
- Highly proficient in asynchronous, event-driven distributed systems
- Working knowledge of cloud-native architectures, GCP preferred
- Experience in Go and working with real-time data streams …
facing technical capacity before.
About the LanceDB Team
LanceDB was created by experts with decades of experience building tools for data science and machine learning. From co-authors of pandas to Apache PMC members for HDFS, Arrow, Iceberg, and HBase, the LanceDB team has created open-source tools used by millions worldwide.
- Experience in data mining of large datasets using tools like SAS/R/Python to identify patterns
- Experience using tools like Alteryx or Python libraries like Pandas for preprocessing and cleaning data
- Experience using BI tools like Tableau or Power BI for self-service analysis and visualization of data
- Document processing using OCR conversion techniques
- Collaborate with data engineering …
end DS/ML projects successfully from concept to production.
- Proven track record of managing client interactions, presenting technical solutions, and influencing strategic decisions.
- Expertise in Python programming (NumPy, Pandas, Scikit-learn, Keras/TensorFlow/PyTorch).
- Strong understanding of statistical modeling, experimental design, and hypothesis testing.
- Experience with cloud platforms (AWS, Azure, GCP) and MLOps principles.
- Excellent communication …
additional years of experience in lieu of degree)
- Proficiency with open-source programming languages (e.g. Python, R, SQL) and machine learning toolkits (e.g. PyTorch, NumPy, Polars, scikit-learn, TensorFlow, pandas)
- Proficiency using mathematical, statistical, or other data-driven analysis
- Proficiency with APIs, both development/update of internal APIs and use of external APIs
- Experience with data manipulation, analysis, analytic …
Chevy Chase, Maryland, United States Hybrid/Remote Options
Cogent People
development, including:
- Implementation of AI functionality: RAG, MCP, Agentic AI
- Familiarity with ML concepts, techniques, and frameworks (e.g., TensorFlow, PyTorch, Scikit-learn, Keras)
- Experience with data handling libraries (NumPy, Pandas) and AI/ML data pipelines
- Experience designing and implementing data pipelines (batch and streaming) feeding into AI/ML tools
Proven experience developing large-scale applications using Java (Spring …
a related field.
- 2+ years of hands-on experience building and maintaining modern data pipelines using Python-based ETL/ELT frameworks
- Strong Python skills, including deep familiarity with pandas and comfort writing production-grade code for data transformation
- Fluent in SQL, with a practical understanding of data modeling, query optimization, and warehouse performance trade-offs
- Experience orchestrating data workflows …
building and deploying machine learning models (or equivalent experience through research, internships, or significant personal projects).
- Proficiency in Python and its core data science libraries (e.g., scikit-learn, pandas, NumPy).
- Experience building and maintaining APIs for data-centric applications (e.g., FastAPI, Flask).
- A strong foundation in software engineering best practices, including version control (Git), testing, and writing …
- Perform data warehousing functions within Snowflake and transform data using dbt.
- Serve as the subject matter expert in Python and the evolving data science stack, from core libraries like Pandas, NumPy, and Scikit-learn to advanced frameworks like XGBoost and emerging alternatives, to drive continuous model innovation and optimization.
- Implement MLOps practices to streamline the deployment and management of machine …
working with large, heterogeneous datasets.
- Prior experience in commercial consulting, investment banking, private equity diligence, or a client-oriented analytical role is a must.
Skills:
- Strong proficiency in Python (Pandas, NumPy, Scikit-learn) and SQL.
- Familiarity with cloud data environments (AWS, Azure, or Databricks) is a plus.
- Ability to synthesize complex data into clear business insights.
- Strong communication and presentation …
machine learning fundamentals (e.g., model architectures, optimizers, statistical principles) and the data science lifecycle.
- Strong proficiency in Python's scientific computing and machine learning ecosystem (e.g., PyTorch, NumPy, SciPy, Pandas).
- An understanding of how high-level machine learning frameworks interact with low-level hardware (e.g., C/CUDA on GPUs).
- A proactive ownership mindset and the ability to navigate …
Dublin, California, United States Hybrid/Remote Options
Savvymoney
or equivalent experience)
- 6+ years of professional experience in data science or machine learning, ideally in fintech, financial services, or a B2B2C environment
- Strong proficiency in SQL and Python (pandas, scikit-learn, PyTorch, TensorFlow, XGBoost, etc.)
- Hands-on experience with tree-based models (XGBoost, LightGBM, CatBoost) and neural networks
- Proficiency in notebooks (Jupyter, Colab, etc.) and deep learning frameworks such …
recommendations
- Experience independently driving data science and engineering solutions
- Advanced knowledge of Python and SQL for data analysis over large-scale datasets
- Skilled in using Python libraries and packages (Pandas, Polars, PyArrow) in conjunction with the Google Cloud Platform (BigQuery)
- Comfortable working on AI pipelines (Vertex AI)
- Knowledge of bash/shell and orchestration tools (e.g. Airflow) is preferred
- Experience …
in production environments.
- Strong proficiency in Python and SQL, with familiarity using popular libraries for machine learning (e.g. scikit-learn, XGBoost, LightGBM, PyTorch) and data manipulation (e.g. Pandas, NumPy, Polars, DuckDB, Dask).
- Experience applying software engineering best practices to both greenfield and brownfield development (e.g. testing, CI/CD, containerization, observability).
- Excellent technical communication and collaboration skills …