… for data processing. Strong knowledge of SQL and relational databases (e.g., MySQL, PostgreSQL, MS SQL Server). Experience with NoSQL databases (e.g., MongoDB, Cassandra, HBase). Familiarity with data warehousing solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake). Hands-on experience with ETL frameworks and tools (e.g., Apache NiFi …
… in writing and optimizing SQL. Knowledge of AWS services including S3, Redshift, EMR, Kinesis, and RDS. Experience with open-source data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.). Ability to write code in Python, Ruby, Scala, or another platform-related Big Data technology. Knowledge of professional software engineering practices …
… cloud, artificial intelligence, machine learning, mobile, etc.). Preferred qualifications, capabilities, and skills: Knowledge of AWS. Knowledge of Databricks. Understanding of Cloudera Hadoop, Spark, HDFS, HBase, and Hive. Understanding of Maven or Gradle. About the Team: J.P. Morgan is a global leader in financial services, providing strategic advice and products to …
… Deeplearning4j, Torch, TensorFlow, Caffe, Neon, NVIDIA CUDA Deep Neural Network library (cuDNN), and OpenCV) and distributed data processing frameworks (e.g., Hadoop (including HDFS, HBase, Hive, Impala, Giraph, Sqoop) and Spark (including MLlib, GraphX, SQL, and DataFrames)). Execute data science methods using common programming/scripting languages: Python, Java …
… Linux-based systems. Database experience with PostgreSQL or other relational databases. Experience with Docker, Kubernetes, Helm, or other containerization tools. Familiarity with Kafka, Hadoop, HBase, or cloud-based big data solutions. Understanding of geospatial data, data fusion, and machine learning. Experience supporting Intelligence Community and DoD mission sets. CompTIA …
… SITE POSITION. Required Skills may include: Experience with Atlassian software products (JIRA, Confluence, Service Desk, etc.). Maintaining the Apache Hadoop ecosystem, especially utilizing HBase, MapReduce, and Spark. ETL processes utilizing Linux shell scripting, Perl, Python, and Apache Airflow. AWS services such as CloudWatch, CloudTrail, ELB, EMR …
… Boot, NiFi, AWS, Python, Scala, shell scripting, and XML processing. Experience in AWS solution architecture. Maintaining the Apache Hadoop ecosystem, especially utilizing HBase, MapReduce, and Spark. ETL processes utilizing Linux shell scripting, Perl, Python, and Apache Airflow. AWS services such as CloudWatch, CloudTrail, ELB, EMR …
… highly desired but not mandatory to perform the work, include: working with big data processing and NoSQL databases such as MongoDB, Elasticsearch, MapReduce, and HBase; with Apache NiFi; with the Extract, Transform, and Load (ETL) processes; managing and mitigating IT security vulnerabilities using Plans of Action and Milestones (POA&Ms …
… operating systems: security, configuration, and management. Database design, setup, and administration (DBA) experience with Sybase, Oracle, or UDB. Big data systems: Hadoop, Snowflake, NoSQL, HBase, HDFS, MapReduce. Web and mobile technologies, digital workflow tools. Site reliability engineering and runtime operational tools (agent-based technologies) and processes (capacity, change and …
… Ansible, Swagger, Git, Subversion, Maven, Jenkins, Gradle, Nexus, Eclipse, IntelliJ, ExtJS, jQuery, and D3. Cloud technologies: Pig, Hive, Apache Spark, Azure Databricks, Storm, HBase, Hadoop Distributed File System, and MapReduce. Open-source virtual machines and cloud-based … This position is contingent on funding and may not be filled …
… GitHub, GitLab, or Bitbucket. Experience with drafting and maintenance of technical documentation. Experience with RDBMS (e.g., Oracle, Postgres, MySQL) and non-SQL DB (e.g., HBase, Accumulo, MongoDB, Neo4j) methodologies. Familiarity with cloud technologies such as Azure, AWS, or GCP. Experience working in a classified environment on operational networks such …
… databases such as MySQL and MariaDB; use tools like pgAdmin for data management • Engage with big data tools such as Apache Hadoop, Kafka, HDFS, HBase, and ZooKeeper. Required Skills: • Strong experience with Java and Apache NiFi is required • Proven ability to develop and support complex backend systems and data …
… or related work. - Proficiency in Java, AWS, Python, Apache Spark, Linux, Git, Maven, and Docker. - Experience maintaining an Apache Hadoop ecosystem using tools like HBase, MapReduce, and Spark. - Knowledge of ETL processes utilizing Linux shell scripting, Perl, Python, and Apache Airflow. - Experience with AWS services such as CloudWatch, CloudTrail …
… rotation. BASIC QUALIFICATIONS - Good depth of understanding in Hadoop administration, support, and troubleshooting (any two applications: Apache Spark, Apache Hive, Presto, MapReduce, ZooKeeper, HBase, HDFS, and Pig) - Good understanding of Linux and networking concepts - Intermediate programming/scripting skills, ideally in Java or Python, but will consider experience …
… user case support as needed. Provide dataset analysis, development, error handling, version upgrades, and API enhancement. Maintaining the Apache Hadoop ecosystem, especially utilizing HBase, MapReduce, and Spark. ETL processes utilizing Linux shell scripting, Perl, Python, and Apache Airflow. AWS services such as CloudWatch, CloudTrail, ELB, EMR …
… Python, and/or shell scripting. JavaScript development experience with Angular, React, ExtJS, and/or Node.js. Experience with distributed computing technologies including Hadoop, HBase, Cassandra, Elasticsearch, and Apache Spark a plus. Hands-on experience working with Elasticsearch, MongoDB, Node, Hadoop, MapReduce, Spark, RabbitMQ, and …
… five (5) years, a minimum of three (3) years' experience with open-source (NoSQL) products that support highly distributed, massively parallel computation needs, such as HBase, CloudBase/Accumulo, and/or Bigtable. Demonstrated experience developing RESTful services. What we'd like you to have: Demonstrated experience with Ruby …
… Python, and Golang. Supporting experience with database technologies such as PostgreSQL. Supporting experience with cloud technologies such as Hadoop, Kafka, HBase, and Accumulo. Experienced with the full software development lifecycle. PREFERRED SKILLS AND QUALIFICATIONS: CI/CD pipelines and tooling (GitLab CI/CD, ArgoCD, CircleCI, Jenkins …
… CD deployments, monitoring and alerting (Grafana, Prometheus), Kubernetes-based deployments, infrastructure as code and provisioning tools (Terraform, SaltStack). Experience with NoSQL technologies (e.g., Cassandra, HBase) will be considered a plus. A sharp eye for security-related issues and performance bottlenecks. Agile/Scrum project management methodologies. At Adobe, you …
… bachelor's degree. Discretionary Requirements - Cloud Experience: Shall have three (3) years' demonstrated work experience with a distributed, scalable Big Data store (NoSQL) such as HBase, CloudBase/Accumulo, Bigtable, etc.; shall have demonstrated work experience with the MapReduce programming model and technologies such as Hadoop, Hive, Pig …