Terraform, CloudFormation. Experience with AWS cloud services, e.g. EC2, RDS, Redshift. Even better: experience working with large datasets and familiarity with big data infrastructure such as AWS, Hadoop, Spark, Dask, or MapReduce. Experience with data pipelines, data warehouses, data lakes, and relational databases. Experience working in a large cross-functional team. Working knowledge of DISA STIGs, vulnerability management, and building
versioning, testing, CI/CD, API design, MLOps). Building machine learning models and pipelines in Python, using common libraries and frameworks (e.g., TensorFlow, MLflow). Distributed computing frameworks (e.g., Spark, Dask). Cloud platforms (e.g., AWS, Azure, GCP) and HPC. Containerization and orchestration (Docker, Kubernetes). Ability to scope and effectively deliver projects. What we offer: Equity options - share in our success
Docker, and Kubernetes. Strong ownership, accountability, and communication skills. Bonus Points For: Experience leading projects in SCIF environments. Expertise in Cyber Analytics, PCAP, or network monitoring. Familiarity with Spark, Dask, Snowpark, Kafka, or task schedulers like Airflow and Celery.
Hands-on experience with HPC environments, cloud computing, and containerization technologies. Familiarity with version control systems and collaborative development practices. Preferred Qualifications: Experience with distributed computing frameworks (e.g., Spark, Dask). Knowledge of data warehousing and pipeline orchestration tools. Background in scientific computing or large-scale simulation environments. Strong communication skills and ability to translate technical findings into business insights.
Java, R, or Haskell. Additional Qualifications: Experience working in a team environment using Git, GitLab, or GitHub. Experience scaling data engineering across distributed computing clusters, including Apache Spark, NiFi, Dask, Airflow, or Luigi. Experience with SQL and NoSQL database technologies such as Elasticsearch, Solr, HBase, Accumulo, Cassandra, Weaviate, ChromaDB, Pinecone, DuckDB, Neo4j, AWS DynamoDB, Redshift, Aurora, Oracle, PostgreSQL, MSSQL, MySQL
Nice-to-Haves: Demonstrated experience leading project development efforts from a SCIF. Familiarity with cybersecurity analytics, including PCAP, CVEs, and network monitoring. Experience integrating with technologies such as Spark, Dask, Snowpark, or Kafka. Background in web application stacks (e.g., Flask, Django) or task schedulers (e.g., Airflow, Celery, Prefect). Compensation & Benefits: Competitive salary, equity, and performance-based bonus. Full benefits
and 5+ years of experience as a software engineer. Nice If You Have: Experience working with and debugging GPU-enabled applications. Experience with distributed processing technologies such as Spark, Dask, or Ray for data-processing/ETL workflows. Experience with SQL, Elasticsearch, and vector databases. Experience with HTMX or hyperscript. Experience with HW/SW aspects of multi-node, multi
Master's degree in AI/ML, Data Science, Computer Science, or related field. Experience with LLMs, AI agents, NLP, or computer vision. Familiarity with distributed data processing (Spark, Dask, Airflow, Luigi). Hands-on experience with data labeling, curation, and model evaluation workflows. DoD 8140 IAT Level II certification (e.g., Security+ or CISSP). Experience in defense technology or
System Position. Desired Skills: Experience working with and debugging GPU-enabled applications. Familiarity with LLM orchestration, such as the OpenAI API. Experience with distributed processing technologies such as Spark, Dask, or Ray for data-processing/ETL workflows. Experience with SQL, Elasticsearch, and vector databases. Experience with HTMX or hyperscript. Experience with HW/SW aspects of multi-node, multi
NumPy, asyncio, Cython, or PyArrow. Familiarity with C++ is a plus. Bonus++: Experience working with quants, data scientists, or in trading environments. Tech Stack: Python 3.11+ (fast, modern, typed); Dask, pandas, PyArrow, NumPy; PostgreSQL, Parquet, S3; Airflow, Docker, Kubernetes, GitLab CI; internal frameworks built for scale and speed. Why Join: Engineers own projects end-to-end, from design to deployment
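Several of these listings centre on the same Python data stack (pandas, NumPy, PyArrow, Dask over Parquet/S3). As a minimal, hypothetical sketch of the core pattern such pipelines run — a vectorised groupby aggregation — with all data and column names invented for illustration:

```python
import pandas as pd

# Hypothetical tick data: the kind of columnar table these stacks
# typically store in Parquet on S3 (symbols and prices are invented).
ticks = pd.DataFrame({
    "symbol": ["AAPL", "MSFT", "AAPL", "MSFT"],
    "price": [10.0, 20.0, 12.0, 18.0],
})

# Vectorised split-apply-combine: the same groupby-aggregate pattern
# that Dask parallelises across partitions at larger scale.
mean_price = ticks.groupby("symbol")["price"].mean()
print(mean_price.to_dict())  # {'AAPL': 11.0, 'MSFT': 19.0}
```

With Dask, the same `groupby(...).mean()` call runs unchanged on a partitioned `dask.dataframe`, followed by `.compute()`; the pandas version above shows the single-partition logic.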
Experience in front-office roles or collaboration with trading desks. Familiarity with financial instruments across asset classes (equities, FX, fixed income, derivatives). Experience with distributed computing frameworks (e.g., Spark, Dask) and cloud-native ML pipelines. Exposure to LLMs, graph learning, or other advanced AI methods. Strong publication record or open-source contributions in ML or quantitative finance. Please apply within
hands-on experience with time-series modelling (ARIMA, VAR, GARCH, Prophet, LSTMs, Transformers, etc.). Proficiency in the Python ecosystem (pandas, scikit-learn, statsmodels, PyTorch/TensorFlow; polars/dask a plus). Familiarity with SQL and handling large datasets. Curiosity and interest in financial markets and macroeconomics. Master's or PhD in a quantitative discipline a strong advantage.
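The time-series models named above (ARIMA, VAR, GARCH) all build on autoregressive structure. A minimal NumPy sketch of that underlying idea — recovering an AR(1) coefficient by least squares on simulated data (not any listing's actual workflow; in practice one would use statsmodels' ARIMA):

```python
import numpy as np

# Simulate an AR(1) series x_t = phi * x_{t-1} + noise, then recover phi.
rng = np.random.default_rng(0)
phi_true = 0.8
n = 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal()

# OLS estimate of phi from lagged pairs (x_{t-1}, x_t):
# phi_hat = sum(x_{t-1} * x_t) / sum(x_{t-1}^2)
x_lag, x_cur = x[:-1], x[1:]
phi_hat = np.dot(x_lag, x_cur) / np.dot(x_lag, x_lag)
print(round(phi_hat, 2))  # close to 0.8
```

This is the one-lag special case; ARIMA adds differencing and moving-average terms, and VAR generalises the same regression to vector-valued series.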
of-the-art technology (deep learning, VLMs), own a large proprietary dataset of images and videos, and use a multi-GPU cluster. We use standard frameworks (Python, PyTorch, TensorFlow, Dask, AWS). We impact production directly, but also the machine learning community. Responsibilities: You will contribute to research on Vision Language Models (VLMs) for fraud detection, face matching, and document