tools and frameworks such as Apache Airflow, dbt, Informatica, Talend. Proficiency in at least one scripting or programming language, such as Python, with an understanding of libraries like Pandas or NumPy for data manipulation. Project management skills. Great numerical and analytical skills. Excellent problem-solving skills. Attention to detail and excellent communication skills, both written and verbal. Have …
Abingdon, Oxfordshire, United Kingdom Hybrid/Remote Options
NES Fircroft
Tools for scalable data processing: Kubernetes, Spark. Experience with Java 2D graphics and 3D OpenGL programming. Experience with scientific computing libraries and frameworks: Python (NumPy, SciPy, Pandas, TensorFlow for ML/AI); C/Java (CUDA for GPU acceleration); Angular or React; microservices (Quarkus, Spring Boot, AWS API Gateway); Docker, Kubernetes. With over …
framework experience (SQLAlchemy). Containerization familiarity with Docker and/or Kubernetes. Cloud platform knowledge (AWS preferred, but also Azure, GCP). Pluses: experience working with data scientists; data library experience (Pandas and/or NumPy); knowledge of microservices architecture and RESTful API design; integration experience with LangChain or similar AI frameworks to build AI-based workflows. Technical: Python, containerization, cloud platform …
and more on reliability and fixes. Key Skills: Investigating and debugging complex data flow and Machine Learning issues within a live, high-impact production environment. Extensive Python, NumPy, and Pandas experience is required for this role. You must demonstrate a deep commercial background in the following areas: Extensive Python: very strong, production-level Python coding and debugging skills. Production Environment: proven …
performance ETL pipelines and applications from scratch; Must have 1 year of experience working with data visualization tools including Plotly and Streamlit, and computational and data manipulation packages including pandas, scikit-learn, statsmodels, and cvxpy; Must have 1 year of experience applying knowledge of basic Machine Learning models including CNN, LSTM, SVM to sentiment analysis functions. Salary …
design). Strong background in AI/ML with experience using frameworks such as TensorFlow, PyTorch, or Scikit-learn. Proficiency in data handling and manipulation using libraries like NumPy and Pandas. Experience with SQL databases for managing and accessing training data. Knowledge of model deployment and scaling in enterprise or cloud environments (AWS, Azure, or GCP). Familiarity with containerization and orchestration …
matter expert for pricing models and valuation logic, supporting risk and trading teams globally. Skills and Experience: Expert-level Python developer with strong experience in numerical computing (NumPy, SciPy, Pandas). Deep understanding of derivatives pricing theory, volatility modelling, and stochastic calculus. Experience with calibration, curve bootstrapping, and risk measures (Greeks, sensitivities, VaR). Background in pricing and risk models …
implementing ETL/ELT pipelines and data workflows. Hands-on experience with AWS data services, such as S3, Glue, Redshift, Lambda, and IAM. Proficiency in Python for data engineering (pandas, boto3, pyodbc, etc.). Solid understanding of data modeling, relational databases, and schema design. Familiarity with version control, CI/CD, and automation practices. Ability to collaborate with data scientists …
San Francisco, California, United States Hybrid/Remote Options
Block
working with a cross-functional, globally distributed team. Natural curiosity and desire to grow and help shape all aspects of our team. Technologies We Use and Teach: Python (NumPy, Pandas, Scikit-learn, PyTorch, etc.) with occasional Kotlin and Java; Snowflake, GCP, AWS, and orchestration tools such as Prefect and Airflow; transformer models (BERT, LLMs, etc.). We're working to build …
London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
continuous improvement initiatives. Essential Skills for the AWS Data Engineer: extensive hands-on experience with AWS data services; strong programming skills in Python (including libraries such as PySpark or Pandas); solid understanding of data modelling, warehousing and architecture design within cloud environments; experience building and managing ETL/ELT workflows and data pipelines at scale; proficiency with SQL and working …
Essential Skills and Experience: Proven ability to solve complex, real-world problems through data science and analytics. Experience coaching and reviewing the work of junior team members. Strong Python skills (pandas, numpy, scikit-learn) and a solid grounding in probability and statistics. Deep knowledge of machine learning methods and their practical application. Experience managing multiple end-to-end data science projects …
delivery. Requirements: 8+ years of experience delivering data-centric platforms with large datasets, fast SLAs, and high data quality standards. Advanced proficiency in Python, including data processing libraries (e.g., Pandas, PySpark). Strong hands-on experience with the AWS data stack (S3, Glue, Athena, EMR, SageMaker) as well as orchestration and warehousing tools like Airflow and Snowflake. Proven track record building …
and causal inference models, preferably in pricing, marketplace, or supply chain contexts. Experience with experimental design and statistical inference in real-world business settings. Technical Skills: Proficiency in Python (pandas, NumPy, SciPy, scikit-learn, TensorFlow/PyTorch preferred). Strong SQL skills and experience querying large-scale data platforms (e.g., Snowflake, Redshift). Familiarity with scientific software principles (version control …
end DS/ML projects successfully from concept to production. Proven track record of managing client interactions, presenting technical solutions, and influencing strategic decisions. Expertise in Python programming (NumPy, Pandas, Scikit-learn, Keras/TensorFlow/PyTorch). Strong understanding of statistical modeling, experimental design, and hypothesis testing. Experience with cloud platforms (AWS, Azure, GCP) and MLOps principles. Excellent communication …
Chevy Chase, Maryland, United States Hybrid/Remote Options
Cogent People
development, including: implementation of AI functionality (RAG, MCP, agentic AI); familiarity with ML concepts, techniques, and frameworks (e.g., TensorFlow, PyTorch, Scikit-learn, Keras); experience with data handling libraries (NumPy, Pandas) and AI/ML data pipelines; experience designing and implementing data pipelines (batch and streaming) feeding into AI/ML tools. Proven experience developing large-scale applications using Java (Spring …
building and deploying machine learning models (or equivalent experience through research, internships, or significant personal projects). Proficiency in Python and its core data science libraries (e.g., scikit-learn, pandas, numpy). Experience building and maintaining APIs for data-centric applications (e.g., FastAPI, Flask). A strong foundation in software engineering best practices, including version control (Git), testing, and writing …
working with large, heterogeneous datasets. Prior experience in commercial consulting, investment banking, private equity diligence, or a client-oriented analytical role is a must. Skills: Strong proficiency in Python (Pandas, NumPy, Scikit-learn) and SQL. Familiarity with cloud data environments (AWS, Azure, or Databricks) is a plus. Ability to synthesize complex data into clear business insights. Strong communication and presentation …
WHAT WE NEED FROM YOU: Required: 7+ years of professional work experience in Python software development; extensive experience with core AI/ML libraries (e.g., TensorFlow, PyTorch, scikit-learn, Pandas, XGBoost); experience with Python RESTful APIs using FastAPI, ensuring effective communication between backend systems and applications; experience with relational and/or NoSQL databases (e.g., SQL, Google Firestore, Azure …
recommendations. Experience independently driving data science and engineering solutions. Advanced knowledge in using Python and SQL for data analysis over large-scale datasets. Skilled in using Python libraries and packages (Pandas, Polars, PyArrow) in conjunction with the Google Cloud Platform (BigQuery). Comfortable working on AI pipelines (Vertex AI). Knowledge of bash/shell and orchestration tools (e.g., Airflow) is preferred. Experience …
SparkSQL, BigQuery, Snowflake, Databricks), with high proficiency in query languages like SQL. Expert in Python or R and numerical and scientific libraries used for statistics and machine learning (e.g., Pandas, NumPy, SciPy, scikit-learn). Experience building production-level models in cloud environments (e.g., AWS, Azure). Demonstrated proficiency in time series modeling (GAMs, GLMs, ARs) and real-time forecasting are …
Charlotte, North Carolina, United States Hybrid/Remote Options
Brightspeed
natural language processing or similar disciplines. Demonstrated experience in algorithm selection, model training, feature engineering, fine-tuning performance, and implementation. Proficiency in Python and key data science libraries, including Pandas, NumPy, Scikit-learn, Matplotlib and/or Seaborn. Strong capabilities in SQL and experience working with SQL query engines. Experience with data visualization tools such as Power BI. Transformation & Process …
solving skills and the ability to turn abstract business and product ideas into concrete data science and engineering solutions. Expert-level coding abilities in Python for data science (e.g., Pandas, NumPy, Scikit-learn) and mastery of complex SQL across large datasets. Ability to effectively communicate complex technical concepts to both technical and non-technical audiences. Desire to thrive in a …