libraries (pandas, numpy, scikit-learn) and writing clear, modular, reusable code with error handling and logging
Data visualisation: ability to build interactive dashboards and reports using tools such as Plotly, Dash, Matplotlib or Streamlit
Cloud and tooling: experience with AWS (S3, Lambda, Bedrock, Athena, or SNS) or other cloud services (Azure or GCP), and familiarity with version control (Git/
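As a rough illustration of the "clear, modular, reusable code with error handling and logging" requirement above, a minimal Python sketch might look like the following; the file name and column names (sales.csv, order_date, revenue) are hypothetical stand-ins, not taken from the listing.

```python
import logging

import pandas as pd

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def load_sales_data(path: str) -> pd.DataFrame:
    """Load a CSV of sales records, logging failures instead of failing silently."""
    try:
        # "order_date" is a hypothetical column used only for this sketch.
        df = pd.read_csv(path, parse_dates=["order_date"])
        logger.info("Loaded %d rows from %s", len(df), path)
        return df
    except FileNotFoundError:
        logger.error("Input file not found: %s", path)
        raise


def summarise_by_month(df: pd.DataFrame) -> pd.DataFrame:
    """Aggregate revenue by calendar month."""
    return (
        df.assign(month=df["order_date"].dt.to_period("M"))
        .groupby("month", as_index=False)["revenue"]
        .sum()
    )


if __name__ == "__main__":
    summary = summarise_by_month(load_sales_data("sales.csv"))
    print(summary)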
TensorFlow, and PyTorch
Practical experience with Generative AI and exposure to leading LLM platforms (Anthropic, Meta, Amazon, OpenAI)
Proficiency with essential data science libraries including Pandas, NumPy, scikit-learn, Plotly/Matplotlib, and Jupyter Notebooks
Knowledge of ML-adjacent technologies, including AWS SageMaker and Apache Airflow
Strong skills in data preprocessing, wrangling, and augmentation techniques
Experience deploying scalable AI solutions
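To give a flavour of the data preprocessing and scikit-learn skills this listing names, here is a minimal, self-contained sketch; the synthetic dataset and column names (age, income, segment, churned) are invented for illustration and do not come from the listing.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Small synthetic dataset standing in for real project data.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 70, 200),
    "income": rng.normal(40_000, 10_000, 200),
    "segment": rng.choice(["a", "b", "c"], 200),
    "churned": rng.integers(0, 2, 200),
})

numeric = ["age", "income"]
categorical = ["segment"]

# Impute and scale numeric features, one-hot encode the categorical one.
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer()), ("scale", StandardScaler())]), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

model = Pipeline([("prep", preprocess), ("clf", LogisticRegression(max_iter=1000))])

X_train, X_test, y_train, y_test = train_test_split(
    df[numeric + categorical], df["churned"], random_state=0
)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```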
emerging trends and technologies in the field of data science.
Requirements
Proven experience as a data scientist using Python and a range of libraries (NumPy, Pandas, Scikit-Learn, Matplotlib, Plotly, etc.).
Strong expertise in statistical modelling, machine learning, and data mining techniques.
Experience in computer vision is essential.
Data engineering (pipelines, databases, infrastructure), ideally with AWS experience, would be
running command line operations in one or more operating systems:
  o Windows and/or Linux (command line)
Experience with two or more visualization tool packages:
  o e.g. ggplot, Plotly, matplotlib, D3, Tableau, bokeh
Experience connecting to two or more database and web data sources:
  o Database: e.g. Postgres, Oracle, SQLite, and/or Arc SDE
  o Web: e.g. API
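A small sketch of the database-plus-visualization combination named above, using an in-memory SQLite database and matplotlib; the table schema, column names, and output file name are made up for illustration only.

```python
import sqlite3

import matplotlib.pyplot as plt
import pandas as pd

# Build a throwaway in-memory database so the example is self-contained;
# the "observations" table is hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE observations (site TEXT, reading REAL);
    INSERT INTO observations VALUES ('north', 4.2), ('north', 3.9),
                                    ('south', 5.1), ('south', 4.8);
""")

# pandas can read straight from the open connection.
df = pd.read_sql_query(
    "SELECT site, AVG(reading) AS mean_reading FROM observations GROUP BY site",
    conn,
)

# Simple bar chart of the aggregated query result.
df.plot.bar(x="site", y="mean_reading", legend=False)
plt.ylabel("mean reading")
plt.tight_layout()
plt.savefig("mean_reading_by_site.png")
```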
algorithms with R, Python, SQL, or NoSQL.
Knowledge of distributed data and computing tools such as Hadoop, Hive, Spark, MapReduce, or EMR.
Hands-on experience with visualization tools like Plotly, Seaborn, or ggplot2.
Security+ certification.
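As a hedged sketch of the distributed computing tools mentioned (Spark in this case), the following assumes a local PySpark installation; the event data and column names are invented for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-aggregation").getOrCreate()

# Tiny in-memory dataset; column names are hypothetical.
events = spark.createDataFrame(
    [("checkout", 120), ("checkout", 80), ("search", 30)],
    ["event_type", "duration_ms"],
)

# Group and aggregate in a way Spark can distribute across executors.
summary = events.groupBy("event_type").agg(
    F.count("*").alias("n_events"),
    F.avg("duration_ms").alias("avg_duration_ms"),
)
summary.show()

spark.stop()
```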
problem-solving skill set.
Familiarity with the Software Development Life Cycle (SDLC) process and its various stages, including experience with JIRA and Confluence.
Good to have skills:
Familiarity with Plotly and Matplotlib for data visualization of large datasets.
Experience with cloud architecture applications such as Dataiku or Databricks; competency with ETL tools.
Knowledge of regulatory frameworks, RISK, CCAR, and GDPR.
experience as a frontend engineer
Strong proficiency in JavaScript/TypeScript, HTML, and CSS
Experience with modern frontend frameworks (React, Vue, or Svelte)
Familiarity with data visualization libraries (D3.js, Plotly, Three.js, etc.)
Experience in data-heavy scientific or engineering domains is a bonus
Why Join Orbital?
Competitive salary commensurate with the AI sector
Flexible and generous paid time off
Excellent health