such as Teradata, Oracle, and SAP BW, and migration of these data warehouses to modern cloud data platforms. Deep understanding and hands-on experience with big data technologies like Hadoop, HDFS, Hive, Spark, and cloud data platform services. Proven track record of designing and implementing large-scale data architectures in complex environments. CI/CD/DevOps experience is a plus. Skills: Strong
Tech You’ll Work With: This business doesn’t do “just one stack”. You’ll be expected to work across a broad tech landscape. Big Data & Distributed Systems: HDFS, Hadoop, Spark, Kafka; Cloud: Azure or AWS; Programming: Python, Java, Scala, PySpark – you’ll need two or more, Python preferred; Data Engineering Tools: Azure Data Factory, Databricks, Delta Lake, Azure
experience Preferred Qualifications: Bachelor's degree in a related field preferred. Windows 7/10, MS Project, Apache Airflow, Python, Java, JavaScript, React, Flask, HTML, CSS, SQL, R, Docker, Kubernetes, HDFS, Postgres, Linux, AutoCAD, JIRA, GitLab, Confluence. About Us: IntelliBridge delivers IT strategy, cloud, cybersecurity, application, data and analytics, enterprise IT, intelligence analysis, and mission operation support services to accelerate technical
related field preferred. Active TS/SCI Required. Preferred Qualifications: Windows 7/10, MS Project, Apache Airflow, Python, Java, JavaScript, React, Flask, HTML, CSS, SQL, R, Docker, Kubernetes, HDFS, Postgres, Linux, AutoCAD, JIRA, GitLab, Confluence. Also looking for a Senior Developer at a higher compensation
Description: Spark - must have; Scala - must have; Hive/SQL - must have. Job Description: Scala/Spark • Good Big Data resource with the below skillset: Spark, Scala, Hive/HDFS/HQL • Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.) • Experience in big data technologies; real-time data processing platform (Spark Streaming) experience would be an advantage. • Consistently demonstrates
have demonstrated work experience with the MapReduce programming model and technologies such as Hadoop, Hive, Pig, etc.; Shall have demonstrated work experience with the Hadoop Distributed File System (HDFS); Shall have demonstrated work experience with serialization such as JSON and/or BSON
clearance with a poly is required. You could also have this: Experience using the Atlassian Tool Suite. Experience with development of any of the following: Hadoop, Pig, MapReduce, or HDFS. Working knowledge of other object-oriented programming languages such as Java or C++. Working knowledge of front-end data visualization libraries (e.g., D3.js, Raphael.js). Salary Range
relevant solutions to ensure design constraints are met by the software team. Ability to initiate and implement ideas to solve business problems. Preferred qualifications, capabilities, and skills: Knowledge of HDFS, Hadoop, Databricks; knowledge of Airflow, Control-M; familiarity with containers and container orchestration such as ECS, Kubernetes, and Docker; familiarity with troubleshooting common networking technologies and issues. About Us J.P.
Table. Shall have demonstrated work experience with the MapReduce programming model and technologies such as Hadoop, Hive, Pig. Shall have demonstrated work experience with the Hadoop Distributed File System (HDFS). Shall have demonstrated work experience with serialization such as JSON and/or BSON. Shall have demonstrated work experience developing RESTful services. Shall have at least three (3) years' experience
have demonstrated work experience with the MapReduce programming model and technologies such as Hadoop, Hive, Pig, etc.; Shall have demonstrated work experience with the Hadoop Distributed File System (HDFS); Shall have demonstrated work experience with serialization such as JSON and/or BSON. Position requires an active Security Clearance with appropriate Polygraph. Pay Range: 205,000-255,000. The RealmOne
Angular) Experience with DevOps and automation tools (e.g., Docker, Ansible, GitLab) Experience with databases, both relational and NoSQL (e.g., PostgreSQL, Elasticsearch) Experience with big data technologies (e.g., Spark, Delta Lake, HDFS, YARN) Must be willing to work onsite in Fort Liberty (Fayetteville, N.C.)
Store (NoSQL) such as HBase, CloudBase/Accumulo, BigTable, etc. o MapReduce programming model and technologies such as Hadoop, Hive, Pig, etc. o Hadoop Distributed File System (HDFS) o Serialization such as JSON and/or BSON • 4 years of SWE experience may be substituted for a bachelor's degree. • TS/SCI Clearance Required. Salary between
compliance • Collaborate within cross-functional Integrated Product Teams (IPTs) to drive system integration and ensure mission success • Research and implement distributed storage, routing and querying algorithms, leveraging technologies like HDFS, Hadoop, HBase, and Accumulo (BigTable) Desired Qualifications: • Strong background in network systems engineering, with a clear understanding of data routing, security, and optimization • Experience in the integration of COTS and