plan on securing Series A funding late this year. They code in Python, with React on the frontend. Tech & Data Science stack: Kubernetes & Docker on Google Cloud; Python 3: Pandas, RabbitMQ, Celery, Flask, SciPy, NumPy, Dash, Plotly, Matplotlib; JavaScript, React, Redux; PostgreSQL, Redis; Prometheus, Alertmanager, Datadog. If you joined the company in a Data Science role you would be … modelling skills, but know when to be pragmatic to ensure the best business outcomes. You'll be a coder in Python, C++ or Java, with experience of productionizing analytics code (pandas, SciPy and NumPy). If you're a Data Scientist looking to go on an exciting new journey with an early-stage startup, with the opportunity to work on advanced pricing algorithms …
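As a loose illustration of the pandas work a pricing-focused Data Science role like this might involve (all data and column names below are invented, not the company's actual code), a demand-scaled price adjustment can be sketched as:

```python
import pandas as pd

# Hypothetical toy pricing adjustment: scale each product's base price by
# demand relative to that product's mean demand (invented numbers).
quotes = pd.DataFrame({
    "product": ["A", "A", "B", "B"],
    "demand": [120, 80, 40, 60],
    "base_price": [10.0, 10.0, 25.0, 25.0],
})

quotes["adj_price"] = quotes["base_price"] * (
    quotes["demand"] / quotes.groupby("product")["demand"].transform("mean")
)

print(quotes["adj_price"].tolist())
```

`groupby(...).transform("mean")` keeps the per-product mean aligned with the original rows, which is what makes this a one-liner rather than a merge.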
our growing forecast offering. Developing our power market and dispatch models to grow and enhance Modo's product offering. We use Python and the standard scientific computing stack (NumPy, Pandas, SciPy, scikit-learn, etc.). Working closely with our product and analytics functions to ensure the product we deliver aligns closely with user needs and provides value to the wider Modo … team. Qualifications: 3 to 5 years' experience using Python (or another programming language, e.g. R, C++ or Java) and with the scientific computing stack (NumPy, Pandas, SciPy, scikit-learn, etc.). Strong quantitative skills and a proven track record of solving complex technical problems using data analysis, machine learning, and optimization techniques. Hands-on experience with cloud environments (e.g., AWS) for deploying …
our growing forecast offering. Developing our power market and dispatch models to grow and enhance Modo's product offering. We use Python and the standard scientific computing stack (NumPy, Pandas, SciPy, scikit-learn, etc.). Working closely with our product and analytics functions to ensure the product we deliver aligns closely with user needs and provides value to the wider Modo … team. Qualifications: 3 to 5 years' experience using Python (or another programming language, e.g. R, C++ or Java) and with the scientific computing stack (NumPy, Pandas, SciPy, scikit-learn, etc.). A degree in a quantitative field such as mathematics, engineering, computer science, physics or a related discipline. Previous experience in energy modelling, with a specific focus on the GB and/ …
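The "power market and dispatch models" this listing describes can be illustrated, very loosely and with invented numbers (this is not Modo's model), by a toy merit-order dispatch in NumPy: generators run cheapest-first until demand is met, and the clearing price is set by the most expensive running unit.

```python
import numpy as np

# Toy merit-order dispatch sketch (illustrative only, invented data).
capacity = np.array([50.0, 30.0, 40.0])   # MW per generator
cost = np.array([20.0, 45.0, 10.0])       # marginal cost per MWh
demand = 70.0                             # MW to serve

order = np.argsort(cost)                  # cheapest generators first
dispatched = np.zeros_like(capacity)
remaining = demand
for i in order:
    take = min(capacity[i], remaining)
    dispatched[i] = take
    remaining -= take

# Clearing price = marginal cost of the most expensive unit that runs.
price = cost[dispatched > 0].max()
print(dispatched.tolist(), price)
```

Real dispatch models add network constraints, ramp rates and storage, but the cheapest-first stacking above is the core idea.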
discipline Understanding of NLP algorithms and techniques and/or experience with Large Language Models (fine-tuning, RAG, agents) Are proficient with Python, including open-source data libraries (e.g. Pandas, NumPy, scikit-learn, etc.) Have experience productionising machine learning models Are an expert in one of predictive modeling, classification, regression, optimisation or recommendation systems Have experience with Spark Have knowledge … information systems (GIS) Experience with cloud infrastructure Experience with graph technology and/or algorithms Our technology stack Python and associated ML/DS libraries (scikit-learn, NumPy, LightGBM, Pandas, LangChain/LangGraph, TensorFlow, etc.) PySpark AWS cloud infrastructure: EMR, ECS, Athena, etc. MLOps: Terraform, Docker, Airflow, MLFlow More information: Enjoy fantastic perks like private healthcare & dental insurance, a generous …
London, England, United Kingdom (Hybrid / WFH Options)
SGI
Snowflake, AWS, and Python Work directly with investment professionals to scope and deliver solutions What we’re looking for: Strong Python engineering skills, including use of data libraries like pandas and NumPy Experience working in front office environments Familiarity with data pipelines and cloud infrastructure Pragmatic, detail-oriented, and able to work independently Experienced Python-focused Software/Data Engineer …
s degree in Computer Science, Software Engineering, or a related field. 3+ years of experience developing financial tools, particularly in credit-related products. Strong Python skills, with experience using Pandas, NumPy, and Dash. Knowledge of fixed income risk, P&L attribution analysis, and credit derivatives. Experience with SQL for querying databases and integrating data sources. Familiarity with financial data …
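The "P&L attribution analysis" mentioned here can be sketched with a small pandas pivot (all desks, factors and numbers below are invented, purely to show the shape of such a table):

```python
import pandas as pd

# Hypothetical P&L attribution: daily P&L broken out by risk factor
# per desk (invented data).
pnl = pd.DataFrame({
    "desk":   ["credit", "credit", "credit", "rates"],
    "factor": ["carry", "spread", "carry", "curve"],
    "pnl":    [1500.0, -400.0, 500.0, 250.0],
})

attribution = pnl.pivot_table(index="desk", columns="factor",
                              values="pnl", aggfunc="sum", fill_value=0.0)
print(attribution)
```

`pivot_table` with `aggfunc="sum"` collapses repeated (desk, factor) rows, which is exactly the aggregation an attribution report needs.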
and iterate on solutions. Requirements 1-4 years of experience in a quantitative, analytics, or developer role within a financial institution or trading environment. Strong proficiency in Python (e.g., Pandas, NumPy, Jupyter) and experience building data pipelines, analytical tools, or dashboards. SQL experience is a plus. Proficiency in Excel and data visualization platforms such as Power BI, Tableau, or …
SNS/SQS, IAM, CloudWatch). Strong SQL programming skills, including analytical queries and query optimization. Experience with unstructured/graph databases. Proficient in Python and libraries like Pandas and Boto3. Solid understanding of data platform architecture and data governance principles. Familiarity with Snowflake, CI/CD, orchestration tools (e.g. Airflow, Step Functions, Prefect) and data visualization tools. Agile …
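The "analytical queries" this role calls for typically mean SQL window functions. A minimal, self-contained illustration (schema and data invented) using Python's built-in `sqlite3`:

```python
import sqlite3

# Illustrative analytical SQL: a window function computing a per-user
# running total, run against an in-memory SQLite database (invented schema).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id TEXT, amount REAL)")
con.executemany("INSERT INTO events VALUES (?, ?)",
                [("u1", 10.0), ("u1", 30.0), ("u2", 5.0)])

rows = con.execute("""
    SELECT user_id,
           amount,
           SUM(amount) OVER (PARTITION BY user_id) AS user_total
    FROM events
    ORDER BY user_id, amount
""").fetchall()
print(rows)
```

`PARTITION BY` attaches the per-user total to every row without collapsing them, which a plain `GROUP BY` cannot do.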
position is temporary until 31.12.2025, with the possibility to extend further. Your profile: 3+ years of experience as a data engineer. Proficiency in Python and SQL. Expertise in transforming data using Pandas and PySpark. Experience in building big data pipelines. In-depth knowledge of Apache Airflow and similar orchestration/ETL tools. Expertise in designing, populating and retrieving data with PostgreSQL …
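A typical transform step inside the kind of Airflow-orchestrated pipeline this role describes is deduplication to the latest record per key. A minimal pandas sketch (data and column names invented):

```python
import pandas as pd

# ETL-style transform sketch: keep only the most recent row per id,
# as one pipeline task might do before loading into PostgreSQL.
raw = pd.DataFrame({
    "id":      [1, 1, 2],
    "updated": pd.to_datetime(["2025-01-01", "2025-02-01", "2025-01-15"]),
    "value":   [10, 20, 30],
})

latest = (raw.sort_values("updated")
             .drop_duplicates("id", keep="last")
             .sort_values("id")
             .reset_index(drop=True))
print(latest)
```

The same sort-then-dedupe pattern translates directly to PySpark with a window over `id` ordered by `updated`.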
managing and querying complex datasets; Driving the use of clean architecture and sound engineering practices to ensure long-term scalability and performance; Contributing directly to development using Python (with Pandas, SQLAlchemy, Selenium) and JavaScript (React, Redux, Material UI); Participating in Agile Scrum rituals (Sprints, Standups, Planning), ensuring team alignment and velocity; Mentoring and onboarding developers, and fostering a culture of …
improvement in the development pipeline. Requirements Minimum Qualifications: - 7+ years of professional experience in Python development and data processing. - Deep expertise with Python data processing libraries such as PySpark, Pandas, and NumPy. - Strong experience with API development using FastAPI or similar frameworks. - Proficiency in test-driven development using PyTest and mocking libraries. - Advanced understanding of cloud-native software architectures and …
Frameworks: Scikit-learn, XGBoost, LightGBM, StatsModels; PyCaret, Prophet, or custom implementations for time series; A/B testing frameworks (e.g., DoWhy, causalml) Programming & Data Tools: Python: Strong foundation in Pandas, NumPy, matplotlib/seaborn, scikit-learn, TensorFlow, PyTorch, etc. SQL: Advanced querying for large-scale datasets. Jupyter, Databricks, or notebook-based workflows for experimentation. Data Access & Engineering Collaboration: Comfort working …
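At the simplest end of the time-series work this stack targets, a linear-trend baseline (not any of the listed frameworks, just plain NumPy on synthetic data) looks like:

```python
import numpy as np

# Toy time-series baseline: fit a linear trend and extrapolate one step.
# The series is synthetic with a known slope (2.0) and intercept (5.0).
t = np.arange(10)
y = 2.0 * t + 5.0

slope, intercept = np.polyfit(t, y, 1)   # degree-1 least-squares fit
forecast = slope * 10 + intercept        # one step beyond the data
print(forecast)
```

Frameworks like Prophet or StatsModels add seasonality, uncertainty intervals and changepoints on top of exactly this kind of trend component.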
functional teams to analyze requirements and architect high-quality software solutions. Develop IT solutions using Python and related libraries for AI applications. Build scalable data pipelines utilizing technologies like Pandas, NumPy, or Spark. Apply Agile and DevOps practices to deliver effective solutions. Understand and translate business requirements into technical specifications. Identify opportunities for enhancing DevOps processes and explore new technologies. More ❯
to detail, and clear communication skills Nice To Have: Prior exposure to execution algos, TCA, order-routing, or market-impact modelling Knowledge of statistical or machine-learning libraries (NumPy, pandas, scikit-learn, PyTorch) Experience building distributed systems with message buses (Kafka, ZeroMQ) and asynchronous I/O Experience with cloud or on-prem orchestration and scheduling frameworks (Kubernetes, HTCondor …
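The TCA exposure mentioned above usually starts with implementation shortfall versus the arrival price. A hedged sketch with invented fills (sign convention here: positive basis points = execution cost on a buy):

```python
import numpy as np

# Toy transaction-cost analysis: volume-weighted execution price of a buy
# versus the arrival price, expressed in basis points (invented data).
arrival_price = 100.00
fills = np.array([100.02, 100.05, 100.10])   # execution prices per fill
qty = np.array([300, 500, 200])              # shares per fill

avg_px = np.average(fills, weights=qty)
shortfall_bps = (avg_px - arrival_price) / arrival_price * 1e4
print(round(shortfall_bps, 2))
```

Production TCA would net out spread, impact and timing components, but all of them are decompositions of this one number.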
or mentorship. Have good communication skills. Nice to have: Experience deploying LLMs and agent-based systems Our technology stack: Python and associated ML/DS libraries (scikit-learn, numpy, pandas, LightGBM, LangChain/LangGraph, TensorFlow, etc.) PySpark AWS cloud infrastructure: EMR, ECS, ECR, Athena, etc. MLOps: Terraform, Docker, Spacelift, Airflow, MLFlow Monitoring: New Relic CI/CD: Jenkins, GitHub Actions …
in the Natural Language domain or when working with other unstructured data. Solid understanding of Python, including writing modular Pythonic code, familiarity with core Python data structures, fluency with pandas, and experience with unit testing. Desirable skills & experience Hands-on work experience with Google Cloud Platform (GCP) implementations. Experience in implementing and supporting Machine Learning systems, including automating data validation …
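The "modular Pythonic code ... with unit testing" this listing asks for, applied to pandas, usually means small pure functions checked with `pandas.testing`. A sketch (function name and data invented):

```python
import pandas as pd
import pandas.testing as pdt

def normalise_text(df: pd.DataFrame, col: str) -> pd.DataFrame:
    """Lower-case and strip whitespace in one column, returning a copy."""
    out = df.copy()
    out[col] = out[col].str.strip().str.lower()
    return out

# A unit check in the style the role describes.
raw = pd.DataFrame({"token": ["  Hello ", "WORLD"]})
expected = pd.DataFrame({"token": ["hello", "world"]})
pdt.assert_frame_equal(normalise_text(raw, "token"), expected)
print("ok")
```

Returning a copy instead of mutating the input is what makes the function trivially testable and safe to compose in a pipeline.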
in a healthcare/life science organization is considered an asset. Other skills we are searching for are: Programming Skills: Proficiency in Python, data analytics, deep learning (Scikit-learn, Pandas, PyTorch, Jupyter, pipelines), and practical knowledge of data tools like Databricks, Ray, Vector Databases, Kubernetes, and workflow scheduling tools such as Apache Airflow, Dagster, and Astronomer. GPU Computing: Familiarity with …
future. We expect this team to work across multiple technologies around a core, including Python, AWS (EKS, S3, Lambda, CDK, etc.), AI/ML/Data Science libraries (LLM tooling, pandas, etc.) About Role: We are hiring several Software Engineers of differing levels to build AI capabilities through the delivery of AI-powered features within PatentSight+ and internal AI tools. PatentSight+ …
McLean, Virginia, United States (Hybrid / WFH Options)
MITRE
D3.js. • Demonstrated ability to manipulate large financial datasets and time series data and perform calculations with at least one modern programming language like Python (utilizing packages like scikit-learn, pandas, or dask), R (utilizing packages like caret, dplyr, or data.table), or other modern language. • Ability to apply, modify and formulate algorithms and processes to solve computational financial problems. • Desire and …
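The financial time-series manipulation described in the bullet above can be illustrated with a toy pandas calculation (invented prices): daily log returns summed into a cumulative return.

```python
import numpy as np
import pandas as pd

# Toy time-series calculation: log returns from a price series, then the
# cumulative return over the whole period (invented data).
prices = pd.Series([100.0, 102.0, 101.0, 105.0])
log_ret = np.log(prices / prices.shift(1)).dropna()
cum_return = np.exp(log_ret.sum()) - 1.0
print(round(cum_return, 4))
```

Log returns are used here because they sum across periods, so the cumulative figure falls out of one `sum()` plus an `exp()`.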
Arlington, Virginia, United States (Hybrid / WFH Options)
CGI
delivering AI/ML solutions, using techniques such as machine learning, deep learning, natural language processing, and computer vision Knowledge of ML Algorithms (Regression, Classification, Clustering) and Data Libraries (Pandas, NumPy) Experience with automated testing strategies and tools including Selenium, Cypress or Playwright Proficient in one or more programming languages, such as Python, R, Java, C# Familiarity with cloud …
Write modular, testable, and production-ready Python code for data preprocessing, feature engineering, and model training. Build and maintain scalable Python-based data pipelines using tools like Airflow and Pandas. Deploy AI models in production using Python microservices and container technologies (e.g., Docker, FastAPI, Flask). Collaborate with MLOps engineers to automate CI/CD pipelines for model training … a related technical field. 3+ years of professional experience in AI/ML development using Python. Strong knowledge of the Python ecosystem for machine learning (e.g., TensorFlow, PyTorch, Pandas, NumPy, FastAPI). Experience deploying Python-based ML models in production environments (on cloud platforms like Azure or AWS). Familiar with MLOps concepts, data versioning (DVC), and model lifecycle …
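The "modular, testable ... feature engineering" work this listing describes is commonly written as pure functions over DataFrames. A hedged sketch (the function, columns and data are all invented for illustration):

```python
import numpy as np
import pandas as pd

def build_features(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical feature-engineering step: derive model inputs from raw
    columns without mutating the caller's frame."""
    out = df.copy()
    out["amount_log"] = np.log1p(out["amount"])      # compress heavy tails
    out["is_weekend"] = out["ts"].dt.dayofweek >= 5  # Sat/Sun flag
    return out

raw = pd.DataFrame({
    "amount": [0.0, 10.0],
    "ts": pd.to_datetime(["2025-01-03", "2025-01-04"]),
})
print(build_features(raw))
```

Because the function takes and returns a plain DataFrame, the same code can run inside an Airflow task in batch and inside a FastAPI request handler at serving time, which keeps training and inference features consistent.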