Chicago, Illinois, United States Hybrid / WFH Options
Request Technology - Robyn Honquest
required) Experience with distributed message brokers using Kafka (required) Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink, etc. (required) Experience working with various types of databases, such as relational, NoSQL, and object-based more »
Duration - 12 Months. Location - Hybrid (2 days a week). JD: Experience of working with a streaming & batch technology stack – Confluent Kafka, MongoDB, StreamSets, IBM CDC, Hive, Hadoop, API, Informatica, Airflow, and other similar technologies. SME-level skills and experience in designing/architecting test automation solutions; the ability to creatively problem more »
delivering moderate-to-complex data flows as part of a development team in collaboration with others. You’ll be confident using technologies such as Apache Kafka, Apache NiFi, SAS DI Studio, or other data integration platforms. You can implement, deliver, and translate several data models, including unstructured data … and recognised standards to build solutions using various traditional or big data languages such as SQL, PL/SQL, SAS Macro Language, Python, Scala, Apache Spark, Java, JavaScript, etc., using various tools including SAS, Hue (Hive/Impala), and Kibana (Elasticsearch). Knowledge of data management on Cloud more »
Min 7 yrs with Python, Big Data & data lake solutions; PostgreSQL, ClickHouse, Snowflake, etc. Cloud infrastructure (AWS services). Data processing pipelines using Kafka, Hadoop, Hive, Storm, or ZooKeeper. Hands-on team leadership. The Reward: Joining a fast-growth, successful blockchain business. The role offers fully remote work, a great more »
and availability of the company's software products. Data Processing Pipelines: You'll design and implement data processing pipelines using technologies like Kafka, Hadoop, Hive, Storm, or ZooKeeper, enabling real-time and batch processing of data from the blockchain. Hands-on Team Leadership: As a hands-on leader, you more »
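For illustration, the real-time leg of such a pipeline can be sketched with the `kafka-python` client. The broker address and the `blockchain-events` topic below are hypothetical, and the transformation is kept as a pure function so the batch path (e.g. mapping over files landed in HDFS/Hive) can reuse it:

```python
import json

def enrich_event(raw: dict) -> dict:
    """Pure transformation shared by the real-time and batch paths."""
    return {
        "tx_hash": raw.get("hash"),
        "value_eth": int(raw.get("value", 0)) / 1e18,  # wei -> ETH
    }

def stream_events(bootstrap: str = "localhost:9092"):
    """Real-time leg: requires a running Kafka broker and the
    `kafka-python` package; topic name is hypothetical."""
    from kafka import KafkaConsumer  # deferred so the pure part runs anywhere
    consumer = KafkaConsumer(
        "blockchain-events",
        bootstrap_servers=bootstrap,
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for message in consumer:
        yield enrich_event(message.value)

# The pure function is trivially testable without any infrastructure:
print(enrich_event({"hash": "0xabc", "value": "2000000000000000000"}))
```

Keeping business logic out of the consumer loop is what makes a single codebase serve both the streaming and batch sides of a pipeline like this.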
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
Version 1
Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, Azure Stream Analytics. Direct experience in building data pipelines using Azure Data Factory and Apache Spark (preferably Databricks). Experience building data warehouse solutions using ETL/ELT tools such as SQL Server Integration Services (SSIS), Oracle Data Integrator … ODI), Talend, and WhereScape RED. Experience with Azure Event Hubs, IoT Hub, Apache Kafka, or NiFi for use with streaming data/event-based data. Experience with other open-source big data products, e.g. Hadoop (incl. Hive, Pig, Impala). Experience with open-source non-relational/NoSQL data repositories more »
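The ETL/ELT pattern these tools implement can be shown in miniature with Python's stdlib `sqlite3` standing in for the warehouse; the table names (`staging_orders`, `fact_orders`) and the pence-to-pounds transform are hypothetical:

```python
import sqlite3

def run_elt(conn: sqlite3.Connection, rows: list) -> None:
    """ELT sketch: load raw rows first, then transform inside the warehouse."""
    conn.execute("CREATE TABLE IF NOT EXISTS staging_orders (id INTEGER, amount_pence INTEGER)")
    conn.execute("CREATE TABLE IF NOT EXISTS fact_orders (id INTEGER, amount_gbp REAL)")
    # Load: land the raw data untouched in a staging table.
    conn.executemany("INSERT INTO staging_orders VALUES (?, ?)", rows)
    # Transform: push the work down to the warehouse engine as SQL,
    # which is essentially what ELT tools generate.
    conn.execute("INSERT INTO fact_orders SELECT id, amount_pence / 100.0 FROM staging_orders")
    conn.commit()

conn = sqlite3.connect(":memory:")
run_elt(conn, [(1, 250), (2, 1999)])
print(conn.execute("SELECT * FROM fact_orders ORDER BY id").fetchall())
```

Classic ETL would instead do the pence-to-pounds conversion in the pipeline tool before loading; ELT defers it to the warehouse, which is why SQL skills dominate roles like this one.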
in Unix and shell scripting. * Minimum of 1 year's experience in investment banking or the financial sector. * Performance tuning of Oracle/MySQL/Hive SQL queries/Spark SQL statements. * Experience working with large, multi-terabyte (3+ TB) databases. * Minimum of 5 years' experience in the Big … Data space (Hive, Impala, Spark SQL, HDFS, etc.). * Any cloud experience (AWS/Azure/Google/Oracle). * Solid experience with Oracle objects (Packages, Procedures, Functions). * Very clear concepts of Oracle architecture. * Very strong debugging skills. * Proficient in query tuning. * Detail-oriented. * Strong written and verbal communication more »
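The query-tuning skill these listings keep asking for largely comes down to reading execution plans. A minimal stdlib sketch using SQLite's `EXPLAIN QUERY PLAN` (Oracle, MySQL, Hive, and Spark SQL each have their own `EXPLAIN` equivalent; the `trades` table here is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)")

def plan(sql: str) -> str:
    # Join the plan's detail column, e.g. a full "SCAN" versus a
    # "SEARCH ... USING INDEX" once an index exists.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT qty FROM trades WHERE symbol = 'VOD'")  # full table scan
conn.execute("CREATE INDEX idx_trades_symbol ON trades (symbol)")
after = plan("SELECT qty FROM trades WHERE symbol = 'VOD'")   # index search
print(before)
print(after)
```

The tuning loop on multi-terabyte tables is the same idea at scale: inspect the plan, add or fix an index (or statistics, or a join hint), and confirm the plan changed before trusting the timing.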