billions of edges) using DynamoDB or new enhanced capabilities. • Experience developing and operating graph traversal capabilities built on Apache Gremlin or new enhanced capabilities. • Experience developing and operating NoSQL solutions for complex big data applications. • Experience in data modeling for performance, partition sharding … • Experience building high-quality user interfaces/user experiences with the React framework and WebGL. • Experience designing and operating large-scale graph databases using Apache Cassandra. • Experience performing in-depth technical analysis of large-scale graph databases to develop implementation strategies for search optimizations. • Experience developing technical capabilities for … technologies that are optimized for the particular use of the data, such as relational databases, a NoSQL database (Cassandra), or object storage. • Experience with Apache TinkerPop, Gremlin, and/or JanusGraph to design, develop, implement, and maintain systems. • Knowledge of graph databases to design, develop, implement, and maintain systems.
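As an illustration of the Gremlin traversal work described above, here is a minimal sketch using the gremlinpython driver against a TinkerPop-compatible endpoint such as JanusGraph; the endpoint URL, vertex label, and property names are assumptions, not details from the posting.

```python
# Minimal Gremlin traversal sketch (recent gremlinpython releases).
# The endpoint, 'account' label, and 'accountId' property are hypothetical.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

conn = DriverRemoteConnection("ws://localhost:8182/gremlin", "g")
g = traversal().with_remote(conn)

# Collect up to 10 distinct vertex ids within two hops of a seed vertex.
neighbor_ids = (
    g.V().has("account", "accountId", "seed-123")
         .both().both()
         .dedup()
         .limit(10)
         .values("accountId")
         .to_list()
)
print(neighbor_ids)
conn.close()
```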
use big data for good. Qualifications You Have: • 3+ years of experience using Python, SQL, and PySpark • 3+ years of experience utilizing Databricks or Apache Spark • Experience designing and maintaining Data Lakes or Data Lakehouses • Experience with big data tools such as Spark, NiFi, Kafka, Flink, or others at … multi-petabyte scale • Expertise in designing and maintaining ETL/ELT data pipelines utilizing storage/serialization formats/schemas such as Parquet and Avro • Experience administering and maintaining data science workspaces and tool benches for Data Scientists and Analysts • Secret Clearance • HS diploma or GED • DoD 8570 IAT II … Compliance Certification (such as Security+, CCNA Security, GSEC, etc.) Nice if You Have: • Experience with Apache NiFi; multi-cluster or containerized environment experience preferred • Knowledge of cybersecurity concepts, including threats, vulnerabilities, security operations, encryption, boundary defense, auditing, authentication, and supply chain risk management • Experience with applications, appliances, or machines aligned …
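To ground the PySpark and Parquet pipeline experience listed above, a minimal ETL sketch follows; the S3 paths and column names are invented for illustration.

```python
# Minimal PySpark ETL sketch: read raw events as Parquet, apply a light
# transformation, and write partitioned output. Paths and column names
# are hypothetical, not taken from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

events = spark.read.parquet("s3://example-bucket/raw/events/")
cleaned = (
    events
    .filter(F.col("event_type").isNotNull())   # drop malformed records
    .withColumn("event_date", F.to_date("event_ts"))
)
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"
)
```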
applicable technologies. - Experience in Cloud and Distributed Computing Information Retrieval: at least one or a combination of several of the following areas: HDFS, HBase, Apache Lucene, Apache Solr, MongoDB - Experience with ingesting, parsing, and analyzing disparate data sources and formats: XML, JSON, CSV, binary formats, Sequence or Map Files, Avro, and related technologies - Knowledge of Aspect-Oriented Design and Development - Capable of debugging and profiling cloud and distributed installations: Java Virtual Machine (JVM) memory management, profiling Java applications on UNIX/Linux (CentOS) Experience in SIGINT: - Experience with at least one SIGINT collection discipline area (FORNSAT, CABLE …
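The ingest-and-parse requirement above is the kind of task sketched below: normalizing JSON, CSV, and XML records into one common dict shape using only the Python standard library. The field names and sample records are hypothetical.

```python
# Sketch of normalizing disparate record formats into a shared shape
# before downstream indexing; 'id' and 'body' fields are assumptions.
import csv
import io
import json
import xml.etree.ElementTree as ET

def parse_json(raw: str) -> dict:
    rec = json.loads(raw)
    return {"id": rec["id"], "text": rec["body"]}

def parse_csv(raw: str) -> dict:
    row = next(csv.DictReader(io.StringIO(raw)))
    return {"id": row["id"], "text": row["body"]}

def parse_xml(raw: str) -> dict:
    root = ET.fromstring(raw)
    return {"id": root.findtext("id"), "text": root.findtext("body")}

print(parse_json('{"id": "1", "body": "hello"}'))
print(parse_csv("id,body\n2,world"))
print(parse_xml("<rec><id>3</id><body>again</body></rec>"))
```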
Required Qualifications: 12+ years of experience in data architecture, cloud computing, and real-time data processing. Hands-on experience with Apache Kafka (Confluent), Cassandra, and related technologies. Strong expertise in GCP, including real-time services such as Pub/Sub, Cloud Functions, Datastore, and Cloud Spanner. Experience with message queues (e.g., RabbitMQ) and event-driven patterns. Hands-on experience with data serialization formats (e.g., Avro, Parquet, JSON) and schema registries. Strong understanding of DevOps and CI/CD pipelines for data streaming solutions. Familiarity with containerization and orchestration tools. Excellent communication and leadership skills, with experience …
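As a hedged sketch of the Kafka event-streaming experience above, the snippet below produces a single event with the confluent-kafka client; the broker address, topic, and payload are assumptions, and a production pipeline would typically use an Avro serializer with a schema registry rather than plain JSON.

```python
# Minimal event-producer sketch using confluent-kafka. Broker, topic,
# and event fields are hypothetical.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def on_delivery(err, msg):
    # Report per-message success or failure back from the broker.
    if err is not None:
        print("delivery failed:", err)
    else:
        print("delivered to:", msg.topic())

event = {"order_id": "o-42", "status": "CREATED"}
producer.produce(
    "orders",
    value=json.dumps(event).encode("utf-8"),
    on_delivery=on_delivery,
)
producer.flush()  # block until outstanding messages are delivered
```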
QUALIFICATIONS - Implementation experience with AWS services - Hands-on experience leading large-scale global data warehousing and analytics projects - Experience using some of the following: Apache Spark/Hadoop, Flume, Kinesis, Kafka, Oozie, Hue, ZooKeeper, Ranger, Elasticsearch, Avro, Hive, Pig, Impala, Spark SQL, Presto, PostgreSQL, Amazon EMR, Amazon Redshift.
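A brief sketch of the Spark SQL/Hive style of warehouse query named in the list above, of the kind commonly run on Amazon EMR; the database, table, and columns are invented for illustration.

```python
# Spark SQL aggregation over a Hive-registered table; the
# analytics.page_views table and its columns are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("warehouse-sketch")
    .enableHiveSupport()   # read tables from the Hive metastore
    .getOrCreate()
)

daily = spark.sql("""
    SELECT event_date, COUNT(*) AS events
    FROM analytics.page_views
    GROUP BY event_date
    ORDER BY event_date
""")
daily.show()
```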
knowledge of warehousing and ETL. Extensive knowledge of popular database providers such as SQL Server, PostgreSQL, Teradata, and others. • Proficiency in technologies in the Apache Hadoop ecosystem, especially Hive, Impala, and Ranger. • Experience working with open file and table formats such as Parquet, Avro, ORC, Iceberg, and Delta Lake.
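To illustrate the open file and table formats mentioned above, the sketch below writes one small DataFrame as Parquet, ORC, and Delta; the paths are hypothetical, and the Delta write assumes the delta-spark package and its SQL extensions are configured on the cluster.

```python
# Same data, three open storage formats. Output paths are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("formats-sketch").getOrCreate()
df = spark.createDataFrame([(1, "alpha"), (2, "beta")], ["id", "label"])

df.write.mode("overwrite").parquet("/tmp/demo/parquet")
df.write.mode("overwrite").orc("/tmp/demo/orc")
# Requires delta-spark on the classpath with its extensions enabled.
df.write.mode("overwrite").format("delta").save("/tmp/demo/delta")
```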