with Linux-based systems. Database experience with PostgreSQL or other relational databases. Experience with Docker, Kubernetes, Helm, or other containerization tools. Familiarity with Kafka, Hadoop, HBase, or cloud-based big data solutions. Understanding of geospatial data, data fusion, and machine learning. Experience supporting Intelligence Community and DoD mission sets.
Strong communication and collaboration skills. Azure certifications such as Azure Data Engineer Associate or Azure Solutions Architect Expert. Experience with big data technologies like Hadoop, Spark, or Databricks. Familiarity with machine learning and AI concepts. If you encounter any suspicious mail, advertisements, or persons who offer jobs at Wipro
days/week. Flexibility is key to accommodate any schedule changes per the customer. Preferred Requirements Experience with big data technologies like: Hadoop, Spark, PostgreSQL, Elasticsearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers (EKS), Diode, CI/CD, and Terraform is a plus. Work could possibly require
DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as: Hadoop, Hive, Spark, EMR - Experience with any ETL tool such as Informatica, ODI, SSIS, BODI, DataStage, etc. Our inclusive culture empowers Amazonians to deliver the best
Columbia, Maryland, United States Hybrid / WFH Options
Enlighten, an HII - Mission Technologies Company
of the time to customer sites located in Hawaii. Subject to change based on customer needs. Preferred Requirements Experience with big data technologies like: Hadoop, Spark, PostgreSQL, Elasticsearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers (EKS), Diode, CI/CD, and Terraform is a plus. Work could
one of the listed location options. Basic Qualifications (Required Skills/Experience): • 5+ years of experience with Big Data technologies (e.g. Hadoop, MapReduce, Hive, Pig, Spark) • Experience with database technologies, such as SQL Server, Teradata, Postgres, MySQL, and/or NoSQL technologies (Elasticsearch, MarkLogic, Mongo) • Experience
The role We are looking for a Data Engineer to join the Data Science & Engineering team in London. Working at WGSN Together, we create tomorrow. A career with WGSN is fast-paced, exciting and full of opportunities to grow and
dimensional, relational, and document data lineage and recommend improvements for data ownership and stewardship. Qualifications Programming: Python, SQL, Scala, Java. Big Data: Apache Spark, Hadoop, Databricks, Snowflake, etc. Cloud: AWS (Glue, Redshift), Azure (Synapse, Data Factory)
will be deployed You have experience in database technologies including writing complex queries against their (relational and non-relational) data stores (e.g. Postgres, Apache Hadoop, Elasticsearch, graph databases), and designing the database schemas to support those queries You have a good understanding of coding best practices & design patterns and
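Designing a schema to support a known query pattern, as the listing above asks for, can be sketched minimally. This is an illustrative example only: the tables, columns, and query are invented, and SQLite stands in for a production store like Postgres.

```python
import sqlite3

# Toy schema designed around one analytical query: events per user per day.
# The composite index on (user_id, day) matches the join + GROUP BY below,
# so the query can be served without a full table scan.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE events (
        id      INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),
        day     TEXT NOT NULL,   -- ISO date string, e.g. '2024-01-01'
        kind    TEXT NOT NULL
    );
    CREATE INDEX idx_events_user_day ON events(user_id, day);
""")
conn.executemany("INSERT INTO users(id, name) VALUES (?, ?)",
                 [(1, "ada"), (2, "grace")])
conn.executemany("INSERT INTO events(user_id, day, kind) VALUES (?, ?, ?)",
                 [(1, "2024-01-01", "login"),
                  (1, "2024-01-01", "query"),
                  (2, "2024-01-02", "login")])

# The query the schema was designed for: a join plus an aggregate.
rows = conn.execute("""
    SELECT u.name, e.day, COUNT(*) AS n
    FROM events e JOIN users u ON u.id = e.user_id
    GROUP BY u.id, e.day
    ORDER BY u.name, e.day
""").fetchall()
print(rows)  # [('ada', '2024-01-01', 2), ('grace', '2024-01-02', 1)]
```

The design choice here is the usual one: decide the access pattern first, then pick the index to match it, rather than indexing after the fact.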
Herndon, Virginia, United States Hybrid / WFH Options
The DarkStar Group
learn, standard libraries, etc.), Python packages that wrap Machine Learning (packages for NLP, Object Detection, etc.), Linux, AWS/C2S, Apache NiFi, Spark, PySpark, Hadoop, Kafka, Elasticsearch, Solr, Kibana, Neo4j, MariaDB, Postgres, Docker, Puppet, and many others. Work on this program takes place in Chantilly, VA, McLean, VA and
Chantilly, Virginia, United States Hybrid / WFH Options
The DarkStar Group
learn, standard libraries, etc.), Python packages that wrap Machine Learning (packages for NLP, Object Detection, etc.), Linux, AWS/C2S, Apache NiFi, Spark, PySpark, Hadoop, Kafka, Elasticsearch, Solr, Kibana, Neo4j, MariaDB, Postgres, Docker, Puppet, and many others. Work on this program takes place in Chantilly, VA, McLean, VA and
or private clouds including AWS Experience in distributed cache systems like Apache Ignite or Redis Experience in big data platforms and technologies such as Hadoop, Hive, HDFS, Presto/Starburst, Spark, and Kafka Experience in Spring Framework and Cloud Computing for both batch and real-time, high-volume data
and classification techniques, and algorithms Fluency in a programming language (Python, C, C++, Java, SQL) Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau)
frameworks like TensorFlow, Keras, or PyTorch. Knowledge of data analysis and visualization tools (e.g., Pandas, NumPy, Matplotlib). Familiarity with big data technologies (e.g., Hadoop, Spark). Excellent problem-solving skills and attention to detail. Ability to work independently and as part of a team. Preferred Qualifications: Experience with
analytic tools, environments, and data structures. Preferred qualifications: Experience in developing and troubleshooting data processing algorithms and software using Python, Java, Scala, Spark, and Hadoop frameworks. Experience with encryption techniques like symmetric, asymmetric, HSMs, and envelope encryption, and ability to implement secure key storage using a Key Management System. Experience in
Unix/Linux environments and scripting • Familiar with data visualisation tools (e.g. QuickSight, Tableau, Looker, QlikSense) Desirable: • Experience with large-scale data technologies (Spark, Hadoop) • Exposure to microservices/APIs for data delivery • AWS certifications (e.g. Solutions Architect, Big Data Specialty) • Interest or background in Machine Learning This is
Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools: Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with relational SQL and NoSQL databases, including Postgres and Cassandra. Experience with data pipeline and workflow management tools: Azkaban, Luigi
for experience with contributions to maintaining systems engineering artifacts adhering to DoDAF. Experience with cross-domain methodologies. Familiarity with horizontally scalable frameworks such as Hadoop, Docker, Kubernetes and Kafka. Target salary range: $120,001 - $160,000. The estimate displayed represents the typical salary range for this position based on
willing/able to help open/close the workspace during regular business hours as needed Preferred Requirements Experience with big data technologies like: Hadoop, Spark, MongoDB, Elasticsearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers and Kubernetes is a plus Compensation At IAMUS Consulting, we're building
a hybrid environment. On average 1-2 days per week with ability to flex if needed. Preferred Requirements Experience with big data technologies like: Hadoop, Spark, PostgreSQL, Elasticsearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers (EKS), Diode, CI/CD, and Terraform is a plus. Compensation At
MD or San Antonio, TX. Flexibility is key to accommodate any schedule changes per the customer. Preferred Requirements Experience with big data technologies like: Hadoop, Spark, MongoDB, Elasticsearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers and Kubernetes is a plus. Work could possibly require some on-call