and tooling used across the engineering team.

Requirements:
- 5-7 years of Java experience
- Capital markets front office experience
- Experience with data lake (Hadoop) consumption, specifically Hive
- Kafka experience
- Rules engine experience (ideally open-source/vendor products, e.g. Drools or Camunda)
- Unix scripting knowledge
- Markets regulatory/trade control knowledge
- Automation experience
- Cloud experience
- Containerisation …
- Minimum 7 years with Python, big data, and data lake solutions
- PostgreSQL, ClickHouse, Snowflake, etc.
- Cloud infrastructure (AWS services)
- Data processing pipelines using Kafka, Hadoop, Hive, Storm, or ZooKeeper
- Hands-on team leadership

The Reward: Joining a fast-growth, successful blockchain business. The role offers fully remote work, a great …
and troubleshooting of cloud systems.
- Operational experience running a 24x7 production infrastructure at scale
- Proficiency working with data structures, schemas, and technologies like Hadoop, Hive, Redis, and MySQL
- Experience using cloud-native services like GKE, EKS, AWS/GCP load balancing, and AWS/GCP cloud storage platforms (S3 …
delivering moderate-to-complex data flows as part of a development team in collaboration with others. You'll be confident using technologies such as Apache Kafka, Apache NiFi, SAS DI Studio, or other data integration platforms. You can implement, deliver, and translate several data models, including unstructured data … and recognised standards to build solutions using various traditional or big data languages such as SQL, PL/SQL, SAS Macro Language, Python, Scala, Apache Spark, Java, and JavaScript, with tools including SAS, Hue (Hive/Impala), and Kibana (Elasticsearch). Knowledge of data management on cloud …
Chicago, Illinois, United States Hybrid / WFH Options
Request Technology - Robyn Honquest
required)
- Experience with distributed message brokers using Kafka (required)
- Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink, etc. (required)
- Experience working with various types of databases: relational, NoSQL, object-based …
and availability of the company's software products. Data Processing Pipelines: You'll design and implement data processing pipelines using technologies like Kafka, Hadoop, Hive, Storm, or ZooKeeper, enabling real-time and batch processing of data from the blockchain. Hands-on Team Leadership: As a hands-on leader, you …