Washington, Washington DC, United States Hybrid / WFH Options
BLN24
Ability to manage multiple projects and priorities effectively. Preferred Skills: Experience with cloud-based data lake solutions (e.g., AWS, Azure, Google Cloud). Familiarity with big data technologies (e.g., Hadoop, Spark). Knowledge of data governance and security best practices. Experience with ETL processes and tools. What BLN24 brings to the Game: BLN24 benefits are game changing. We like More ❯
Washington, Washington DC, United States Hybrid / WFH Options
BLN24
solving and analytical skills. Strong communication and collaboration abilities. Preferred Skills: Experience with cloud-based data platforms (e.g., AWS, Azure, Google Cloud). Familiarity with big data technologies (e.g., Hadoop, Spark). Knowledge of data governance and security best practices. Experience with ETL processes and tools. What BLN24 brings to the Game: BLN24 benefits are game changing. We like More ❯
OpenCV. Knowledge of ML model serving infrastructure (TensorFlow Serving, TorchServe, MLflow). Knowledge of WebGL, Canvas API, or other graphics programming technologies. Familiarity with big data technologies (Kafka, Spark, Hadoop) and data engineering practices. Background in computer graphics, media processing, or VFX pipeline development. Experience with performance profiling, system monitoring, and observability tools. Understanding of network protocols, security best More ❯
in a technology company - Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets - Experience with big data technologies such as: Hadoop, Hive, Spark, EMR - Experience operating large data warehouses - Bachelor's/Master's degree in computer science, engineering, analytics, mathematics, statistics, IT or equivalent Amazon is an equal opportunities More ❯
Python, Go, Julia etc.) •Experience with Amazon Web Services (S3, EKS, ECR, EMR, etc.) •Experience with containers and orchestration (e.g. Docker, Kubernetes) •Experience with Big Data processing technologies (Spark, Hadoop, Flink etc.) •Experience with interactive notebooks (e.g. JupyterHub, Databricks) •Experience with GitOps-style automation •Experience with *nix (e.g. Linux, BSD, etc.) tooling and scripting •Participated in projects that More ❯
years, associate's with 10 years, bachelor's with 8 years, master's with 6 years, or PhD with 4 years Deep expertise in big data platforms (e.g., Hadoop, Spark, Kafka) and multi-cloud environments (AWS, Azure, GCP) Experience with machine learning frameworks (e.g., TensorFlow, Scikit-learn, PyTorch) Strong programming skills in Python, Java, or Scala Familiarity with data More ❯
Proficiency in data science languages and tools (e.g., Python, R, SQL, Jupyter, Pandas, Scikit-learn) Experience with machine learning frameworks (e.g., TensorFlow, PyTorch) and big data platforms (e.g., Spark, Hadoop) Strong background in statistics, data modeling, and algorithm development Ability to explain complex data findings to technical and non-technical stakeholders Experience supporting national security or defense data programs More ❯
and non-relational databases to build data solutions, such as SQL Server/Oracle, experience with relational and dimensional data structures Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop) Proficiency in infrastructure as code (IaC) using Terraform Experience with CI/CD pipelines and related tools/frameworks Good knowledge of containers (Docker, Kubernetes, etc.) Experience with GCP More ❯
in the schedule. • Must be willing/able to help open/close the workspace during regular business hours as needed. Preferred Requirements • Experience with big data technologies like: Hadoop, Accumulo, Ceph, Spark, NiFi, Kafka, PostgreSQL, ElasticSearch, Hive, Drill, Impala, Trino, Presto, etc. • Experience with containers, EKS, Diode, CI/CD, and Terraform is a plus. Benefits More ❯
needs US citizenship and an active Secret clearance; will also accept TS/SCI or TS/SCI with CI Polygraph Desired Experience: Experience with big data technologies like: Hadoop, Accumulo, Ceph, Spark, NiFi, Kafka, PostgreSQL, ElasticSearch, Hive, Drill, Impala, Trino, Presto, etc. Work could possibly require some on-call work. The Swift Group and Subsidiaries are an Equal More ❯
utilising strong communication and stakeholder management skills when engaging with customers Significant experience of coding in Python and Scala or Java Experience with big data processing tools such as Hadoop or Spark Cloud experience; GCP specifically in this case, including services such as Cloud Run, Cloud Functions, BigQuery, GCS, Secret Manager, Vertex AI etc. Experience with Terraform Prior experience More ❯
Falls Church, Virginia, United States Hybrid / WFH Options
Rackner
Rancher) with CI/CD Apply DevSecOps + security-first practices from design to delivery Tech You'll Touch AWS Python FastAPI Node.js React Terraform Apache Airflow Trino Spark Hadoop Kubernetes You Have Active Secret Clearance 3+ years in Agile, cloud-based data engineering Experience with API design, ORM + SQL, AWS data services Bonus: AI/ML, big More ❯
your way around complex joins and large datasets. Git - Version control is second nature. You know how to branch, commit, and collaborate cleanly. Bonus Skills (nice to have): Apache Hadoop, Spark/Docker, Kubernetes/Grafana, Prometheus, Graylog/Jenkins/Java, Scala/Shell scripting Team Our Tech Stack We build with the tools we love (and we More ❯
Terraform, Jenkins, Bamboo, Concourse etc. Monitoring utilising products such as: Prometheus, Grafana, ELK, Filebeat etc. Observability - SRE Big Data solutions (ecosystems) and technologies such as: Apache Spark and the Hadoop Ecosystem Edge technologies e.g. NGINX, HAProxy etc. Excellent knowledge of YAML or similar languages The following Technical Skills & Experience would be desirable for a Data DevOps Engineer: Jupyter Hub Awareness More ❯
and model evaluation. Deployment experience using Docker or other containerisation tools. Exposure to GPU-based environments for large-scale model training and tuning. Experience with big data tools (Spark, Hadoop) and cloud platforms (AWS, GCP, Azure) is a plus. Strong analytical mindset with the ability to translate data into actionable insights. If you or someone you know of might More ❯
with Spark. Experience building, maintaining, and debugging DBT pipelines. Strong proficiency in developing, monitoring, and debugging ETL jobs. Deep understanding of SQL and experience with Databricks, Snowflake, BigQuery, Azure, Hadoop, or CDP environments. Hands-on technical support experience, including escalation management and adherence to SLAs. Familiarity with CI/CD technologies and version control systems like Git. Expertise in More ❯
and Cross-Venue Surveillance Techniques particularly with vendors such as TradingHub, Steeleye, Nasdaq or NICE Statistical analysis and anomaly detection Large-scale data engineering and ETL pipeline development (Spark, Hadoop, or similar) Market microstructure and trading strategy expertise Experience with enterprise-grade surveillance systems in banking. Integration of cross-product and cross-venue data sources Regulatory compliance (MAR, MAD More ❯
Tableau) Machine Learning Fundamentals (e.g., Supervised, Unsupervised Learning) Machine Learning Algorithms (e.g., Regression, Classification, Clustering, Decision Trees, SVMs, Neural Networks) Model Evaluation and Validation Big Data Technologies (e.g., Spark, Hadoop - conceptual understanding) Database Querying (e.g., SQL) Cloud-based Data Platforms (e.g., AWS Sagemaker, Google AI Platform, Azure ML) Ethics in Data Science and AI Person Specification: Experience supporting data More ❯
to hold or gain a UK government security clearance. Preferred technical and professional experience Experience with machine learning frameworks (TensorFlow, PyTorch, scikit-learn). Familiarity with big data technologies (Hadoop, Spark). Background in data science, IT consulting, or a related field. AWS Certified Big Data or equivalent. IBM is committed to creating a diverse environment and is proud More ❯