Permanent Apache Hive Jobs with Hybrid or Work from Home (WFH) Options

1 to 5 of 5 Permanent Apache Hive Jobs with Hybrid or WFH Options

Appian Software Engineer

Chicago, Illinois, United States
Hybrid / WFH Options
Request Technology - Robyn Honquest
… (required) Experience with distributed message brokers using Kafka (required) Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink, etc. (required) Experience working with various types of databases, such as relational, NoSQL, and object-based …
Employment Type: Permanent
Salary: USD 145,000 Annual

Data Engineer

London Area, United Kingdom
Hybrid / WFH Options
Careers at MI5, SIS and GCHQ
… delivering moderate-to-complex data flows as part of a development team in collaboration with others. You’ll be confident using technologies such as Apache Kafka, Apache NiFi, SAS DI Studio, or other data integration platforms. You can implement, deliver, and translate several data models, including unstructured data … and recognised standards to build solutions using various traditional or big data languages such as SQL, PL/SQL, SAS Macro Language, Python, Scala, Apache Spark, Java, JavaScript, etc., using various tools including SAS, Hue (Hive/Impala), and Kibana (Elasticsearch). Knowledge of data management on Cloud …

Scientist 3, Data Science - 4606

Philadelphia, Pennsylvania, United States
Hybrid / WFH Options
Comcast Corporation
… use Jira, Confluence, and Git in an Agile development environment; perform DevOps processes using Concourse, Docker, and Kubernetes; perform large-scale data processing using Apache Spark; manage big data on Cloudera; perform Machine Learning, including developing and deploying predictive models leveraging ML algorithms; use the AWS cloud platform; deploy tools and applications on Unix; write SQL and PL/SQL scripts in Oracle, Hive, and NoSQL databases; use Telecom standards including eTOM, SID, FCAPS, and ITIL; build customer-centric models and optimization tools using large-scale pipelines that utilize online & offline data, structured & unstructured data, petabytes of data … related technical or quantitative field; and one (1) year of experience programming using Python and Scala; using Jira; performing large-scale data processing using Apache Spark; managing big data on Cloudera; performing Machine Learning; using the AWS cloud platform; deploying tools and applications on Unix; and writing SQL in Hive …
Employment Type: Permanent
Salary: USD Annual

Principal Backend Engineer - Python / Blockchain - UK / EU

United Kingdom
Hybrid / WFH Options
Axiom Recruit
Min. 7 years with Python; Big Data & data lake solutions (PostgreSQL, ClickHouse, Snowflake, etc.); cloud infrastructure (AWS services); data processing pipelines using Kafka, Hadoop, Hive, Storm, or Zookeeper; hands-on team leadership. The Reward: joining a fast-growth, successful blockchain business. The role offers fully remote work, a great …

Principal Backend Engineer

Nationwide, United Kingdom
Hybrid / WFH Options
Key Talent Solutions
… and availability of the company's software products. Data Processing Pipelines: You'll design and implement data processing pipelines using technologies like Kafka, Hadoop, Hive, Storm, or Zookeeper, enabling real-time and batch processing of data from the blockchain. Hands-on Team Leadership: As a hands-on leader, you …
Employment Type: Permanent
Salary: £160000 - £200000/annum Bonus, Dental, Insurance, Equity