in data pipelines, utilizing advanced analytics tools, platforms, and Python. Experience in scripting, tooling, and automating large-scale computing environments. Extensive experience with major tools such as Python, Pandas, PySpark, NumPy, SciPy, SQL, and Git; minor experience with TensorFlow, PyTorch, and Scikit-learn. Data Modeling and Design: Advanced data modeling (conceptual, logical, and physical) with emphasis on scalability and …
Engineering, AI, or related field. 7+ years of professional software development experience, with at least 3 years in AI/ML. Strong proficiency in Python, including libraries like NumPy, Pandas, Scikit-learn, TensorFlow, and PyTorch. Solid understanding of ML algorithms, NLP, deep learning, and statistical methods. Experience with Docker, Kubernetes, and cloud platforms like AWS/Azure/GCP. …
control (Git), and Agile methodologies. Excellent analytical, problem-solving, and communication skills. Preferred Skills: Experience with data engineering, ETL workflows, or big data frameworks (Spark, Airflow). Knowledge of machine learning libraries (NumPy, Pandas, Scikit-learn, TensorFlow, etc.) is a plus. Exposure to DevOps practices, infrastructure as code, and monitoring tools (Jenkins, Terraform, Prometheus). Familiarity with security best practices for Python-based applications. Prior experience in …
end-to-end machine learning operations: model deployment, monitoring, and retraining, supporting integration with production data pipelines and API services. Proficient with Python, especially machine learning libraries like NumPy, Pandas, Scikit-Learn, and PyTorch. Proficient with SQL, including transactional (e.g., PostgreSQL) and analytical (e.g., BigQuery) databases. Professional experience with most, if not all, of the following: Containerization (e.g., Kubernetes and …
PostgreSQL, SQL Server, Snowflake, Redshift, Presto, etc. Experience building ETL and stream processing pipelines using Kafka, Spark, Flink, Airflow/Prefect, etc. Familiarity with data science stack: e.g. Jupyter, Pandas, Scikit-learn, Dask, PyTorch, MLflow, Kubeflow, etc. Strong experience with AWS/Google Cloud Platform (S3, EC2, IAM, etc.), Kubernetes, and Linux in production. Strong proclivity for automation and …
tools and frameworks such as Apache Airflow, dbt, Informatica, or Talend. Proficiency in at least one scripting or programming language, such as Python, with an understanding of libraries like Pandas or NumPy for data manipulation. Project management skills. Great numerical and analytical skills. Excellent problem-solving skills. Have attention to detail and excellent communication skills, both written and verbal. Have …
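The Pandas data-manipulation skill named in this listing typically means split-apply-combine work. A minimal sketch, with entirely made-up column names and values:

```python
import pandas as pd

# Illustrative data; the "region"/"amount" columns are hypothetical.
orders = pd.DataFrame({
    "region": ["EU", "EU", "US", "US", "US"],
    "amount": [100.0, 250.0, 80.0, 120.0, 300.0],
})

# Aggregate total and mean order value per region (split-apply-combine).
summary = orders.groupby("region")["amount"].agg(["sum", "mean"])
print(summary)
```

`groupby(...).agg([...])` returns a DataFrame indexed by the grouping key, with one column per aggregation.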
search/auction experience. Expert in causal methods (uplift modeling, DML, IV, DiD/synth control, BSTS/Bayesian time series) and experimental design. Strong software engineering: Python (pandas, numpy, scikit-learn, LightGBM/XGBoost), SQL; experience with Spark and one of AWS/GCP/Azure. Hands-on with A/B frameworks, power analysis, and measurement diagnostics …
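The power analysis mentioned in this listing is usually done with a library such as statsmodels' `TTestIndPower`, but the normal-approximation formula behind it fits in a few stdlib lines. A sketch, assuming a two-sided, two-sample test with equal group sizes:

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size: float, alpha: float = 0.05,
                          power: float = 0.8) -> int:
    """Approximate n per arm for a two-sided two-sample z-test.

    Normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    where d is the standardized effect size (Cohen's d).
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # e.g. 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(sample_size_per_group(0.2))  # small effect -> 393 per arm
```

This slightly understates the exact t-test requirement for small samples; dedicated libraries apply the t-distribution correction.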
models. Create automation and tooling that enhance efficiency and insight generation. Promote best practices in data governance and software design. You’ll need: Strong programming skills in Python (NumPy, Pandas, PyTorch, TensorFlow, or Scikit-learn). Experience building data pipelines and working with structured/unstructured data (SQL/NoSQL). Familiarity with the AI/ML model lifecycle from …
alerting systems. Work closely with DevOps and infrastructure teams to deploy solutions in cloud and on-prem environments. Required Skills & Experience: Strong proficiency in Python, including libraries such as Pandas, NumPy, and PySpark. Experience with data engineering tools (e.g., Airflow, Kafka, SQL, Parquet). Solid understanding of commodities markets, trading workflows, and financial instruments. Familiarity with cloud platforms (AWS, Azure …
for efficiency. Enhance and create advanced data visualisation applications. Requirements: Proficient in Python software development. 3-5 years' experience in software engineering. Experience with libraries/frameworks such as Pandas, NumPy, SciPy, etc. Skilled in data pipeline orchestration libraries (e.g., Airflow, Prefect). Experience with cloud infrastructure (AWS, GCP, Azure). DevOps skills (CI/CD, containerisation). Familiarity …
Git), and Agile methodologies. Excellent analytical, problem-solving, and communication skills. Preferred Skills: Experience with data engineering, ETL workflows, or big data frameworks (Spark, Airflow). Knowledge of machine learning libraries (NumPy, Pandas, Scikit-learn, TensorFlow, etc.) is a plus. Exposure to DevOps practices, infrastructure as code, and monitoring tools (Jenkins, Terraform, Prometheus). Familiarity with security best practices for Python-based applications. Prior experience in domains …
in data science, machine learning, or a related field, with a track record of delivering impactful data solutions. Proficiency in Python and experience with data science libraries such as Pandas, NumPy, Scikit-learn, TensorFlow, or PyTorch. Strong experience with SQL and working knowledge of data warehouse systems such as Amazon Redshift, Google BigQuery, or Snowflake. Expertise in statistical analysis, machine …
technologies quickly and adapt to a rapidly changing environment. Proficiency in ML/Data Science languages and tools, including Python (and packages including but not limited to Jupyter notebooks, pandas, numpy, scikit-learn, pytorch) and SQL. Proficiency in using SQL and NoSQL databases, including PostgreSQL and ElasticSearch. Proficiency in NLP tools, including spaCy and nltk. Superb coding, scripting, and software engineering experience …
data modeling. Mentor junior developers and foster a culture of technical excellence and collaboration. What We’re Looking For 5+ years’ experience in Python and its scientific libraries (e.g. pandas, NumPy, SciPy). Strong understanding of cloud infrastructure (AWS preferred; Azure/GCP also welcome). Proven experience in system design and data modelling for scalable applications. Solid grasp of …
programming languages such as Python, Java, or C++. Strong understanding of machine learning frameworks such as TensorFlow, PyTorch, or Keras. Experience with data processing and analysis tools like SQL, Pandas, or NumPy. Familiarity with cloud platforms like AWS, Google Cloud, or Azure for AI deployment. Excellent problem-solving skills and ability to work in a fast-paced environment.
IT and regulatory standards. Skills & Experience: Strong proficiency in Python, with a focus on rapid prototyping and data-driven applications. Solid understanding of multiprocessing and AsyncIO. Experience with pandas, NumPy, and SQL for data analysis and transformation. Exposure to REST APIs, messaging systems, and integration with trading or risk platforms. Hands-on experience building front-end tools or GUIs.
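The AsyncIO requirement above usually boils down to running many I/O-bound calls concurrently. A minimal sketch, where the "quote fetch" is a stand-in for a real REST call and the symbols and prices are invented:

```python
import asyncio

async def fetch_quote(symbol: str, delay: float) -> tuple[str, float]:
    # Stand-in for an awaited REST call; symbol and price are made up.
    await asyncio.sleep(delay)
    return symbol, 100.0 + len(symbol)

async def main() -> list[tuple[str, float]]:
    # gather() runs the coroutines concurrently: total wall time is
    # roughly the longest single delay, not the sum of all delays.
    return await asyncio.gather(
        fetch_quote("BRN", 0.02),
        fetch_quote("WTI", 0.01),
    )

results = asyncio.run(main())
print(results)
```

`asyncio.gather` preserves argument order in its result list regardless of which coroutine finishes first.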
and more on reliability and fixes. Key Skills: Investigating and debugging complex data flow and Machine Learning issues within a live, high-impact production environment. Extensive Python, NumPy, and Pandas experience is required for this role. You must demonstrate a deep commercial background in the following areas: Extensive Python: Very strong, production-level Python coding and debugging skills. Production Environment: Proven …
performance ETL pipelines and applications from scratch. Must have 1 year of experience working with data visualization tools, including Plotly and Streamlit, and computational and data manipulation packages, including pandas, scikit-learn, statsmodels, and cvxpy. Must have 1 year of experience applying knowledge of basic Machine Learning models, including CNN, LSTM, and SVM, to sentiment analysis functions. Salary …
Mentor junior engineers and contribute to engineering best practices Required Skills & Experience: 5+ years of experience building and maintaining data pipelines in production environments Strong Python and SQL skills (Pandas, PySpark, query optimisation) Cloud experience (AWS preferred) including S3, Redshift, Glue, Lambda Familiarity with data warehousing (Redshift, Snowflake, BigQuery) Experience with workflow orchestration tools (Airflow, Dagster, Prefect) Understanding of distributed …
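The SQL and query-optimisation skills named above can be sketched without any external database using the stdlib `sqlite3` module; the table, columns, and rows here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, 10.0), (1, 20.0), (2, 5.0)],
)
# An index on the grouping column lets the engine read rows in key
# order instead of sorting a full table scan before aggregating.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")

rows = conn.execute(
    "SELECT user_id, SUM(amount) FROM events "
    "GROUP BY user_id ORDER BY user_id"
).fetchall()
print(rows)
```

On a real warehouse (Redshift, Snowflake, BigQuery) the same reasoning applies via sort/cluster keys rather than B-tree indexes.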