Web Services (S3, Lambda, Glue, API Gateway, Kinesis, IAM); Integrations (Email, SFTP, API, Webhooks, Streaming); Data Formats and Structures (XML, Excel, CSV, TSV, JSON, Avro, Parquet). Qualifications, Basic Requirements: Self-Starter: ability to take initiative and work independently. Confident Speaker: strong communication skills, comfortable presenting and discussing ideas. Technically …
Databases, and/or Data Integration (ETL), or API development. Knowledge of data formats such as JSON and XML, and binary formats such as Avro or Google Protocol Buffers. Experience collaborating with business and technical teams to understand, translate, review, and play back requirements, and collaborate to develop …
support for OpenLink and processes developed by the group; participate in capacity planning and performance/throughput analysis; consume and publish transaction data in Avro over Kafka; automate system maintenance tasks, end-of-day processing jobs, data integrity checks, and bulk data loads/extracts; release planning and …
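The responsibility above of consuming and publishing transaction data in Avro over Kafka could look roughly like the following sketch. It assumes a broker at localhost:9092 and uses fastavro with kafka-python; the "trades" topic, the Trade schema, and its fields are illustrative placeholders rather than anything specified in the posting.

```python
# Minimal sketch: publish and consume Avro-encoded trade records over Kafka.
# Broker address, topic name, and schema are illustrative assumptions.
import io

from fastavro import parse_schema, schemaless_reader, schemaless_writer
from kafka import KafkaConsumer, KafkaProducer

TRADE_SCHEMA = parse_schema({
    "type": "record",
    "name": "Trade",
    "fields": [
        {"name": "trade_id", "type": "string"},
        {"name": "instrument", "type": "string"},
        {"name": "quantity", "type": "double"},
    ],
})

def encode(record: dict) -> bytes:
    """Serialize one record to schemaless Avro bytes."""
    buf = io.BytesIO()
    schemaless_writer(buf, TRADE_SCHEMA, record)
    return buf.getvalue()

def decode(payload: bytes) -> dict:
    """Deserialize schemaless Avro bytes back into a dict."""
    return schemaless_reader(io.BytesIO(payload), TRADE_SCHEMA)

if __name__ == "__main__":
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("trades", encode({"trade_id": "T-1", "instrument": "BRN", "quantity": 500.0}))
    producer.flush()

    consumer = KafkaConsumer("trades", bootstrap_servers="localhost:9092",
                             auto_offset_reset="earliest", consumer_timeout_ms=5000)
    for msg in consumer:
        print(decode(msg.value))
```

In a production setting the schemaless encoding here would typically be replaced by a schema-registry-aware serializer, so consumers can resolve and evolve schemas safely.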
including multi-threading, concurrency, etc. Fluency in C++ and/or Java. Experience working with text or semi-structured data (e.g., JSON, XML, ORC, Avro, Parquet). BS in Computer Science or a related field; Masters or PhD preferred. Snowflake is growing fast, and we're scaling our …
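For the semi-structured formats listed above, a minimal sketch of moving the same records between JSON-style dicts, Parquet, and Avro might look like this; the file names, fields, and the pandas/pyarrow/fastavro stack are assumptions, not requirements from the posting.

```python
# Minimal sketch: the same tabular records as JSON-style dicts, Parquet, and Avro.
# File names and fields are illustrative placeholders.
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
from fastavro import parse_schema, writer

records = [
    {"user_id": 1, "event": "login", "duration_ms": 42.0},
    {"user_id": 2, "event": "search", "duration_ms": 131.5},
]

# Dicts -> DataFrame -> Parquet (columnar, suited to analytical scans).
df = pd.DataFrame(records)
pq.write_table(pa.Table.from_pandas(df), "events.parquet")
print(pq.read_table("events.parquet").to_pandas())

# Same records -> Avro (row-oriented, suited to streaming and interchange).
schema = parse_schema({
    "type": "record",
    "name": "Event",
    "fields": [
        {"name": "user_id", "type": "long"},
        {"name": "event", "type": "string"},
        {"name": "duration_ms", "type": "double"},
    ],
})
with open("events.avro", "wb") as out:
    writer(out, schema, records)
```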
technologies, particularly within the Kafka ecosystem. What Gives You an Edge: Extensive experience with modern data architectures, including Data Warehouses, Lakehouses, data formats (e.g., Avro, Parquet), and cloud-native platforms (AWS, GCP, Azure). Expertise in integrating Kafka-based solutions with cloud services and enterprise data ecosystems. Demonstrated success …
scoping and delivering customer proposals aligned with Analytics Solutions. Experience with one or more relevant tools (Sqoop, Flume, Kafka, Oozie, Hue, Zookeeper, HCatalog, Solr, Avro, Parquet, Iceberg, Hudi). Experience developing software and data engineering code in one or more programming languages (Java, Python, PySpark, Node, etc.). AWS …
Required Qualifications: 12+ years of experience in data architecture, cloud computing, and real-time data processing. Hands-on experience with Apache Kafka (Confluent), Cassandra, etc., and related technologies. Strong expertise in GCP. Real-time services experience using GCP services such as Pub/Sub, Cloud Functions, Datastore, and Cloud Spanner. Experience … with message queues (e.g., RabbitMQ) and event-driven patterns. Hands-on experience with data serialization formats (e.g., Avro, Parquet, JSON) and schema registries. Strong understanding of DevOps and CI/CD pipelines for data streaming solutions. Familiarity with containerization and orchestration tools. Excellent communication and leadership skills, with experience …
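Where the posting mentions serialization formats and schema registries, a sketch of producing Avro messages through a Confluent Schema Registry could look like the following; the broker and registry URLs, the "sensor-readings" topic, and the SensorReading schema are illustrative placeholders, and the code assumes the confluent-kafka Python client.

```python
# Minimal sketch: produce Avro messages with a Confluent Schema Registry,
# so consumers can validate records and evolve schemas.
# URLs, topic, and schema are illustrative assumptions.
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import MessageField, SerializationContext

SCHEMA_STR = """
{
  "type": "record",
  "name": "SensorReading",
  "fields": [
    {"name": "sensor_id", "type": "string"},
    {"name": "reading", "type": "double"}
  ]
}
"""

registry = SchemaRegistryClient({"url": "http://localhost:8081"})
serializer = AvroSerializer(registry, SCHEMA_STR)
producer = Producer({"bootstrap.servers": "localhost:9092"})

topic = "sensor-readings"
payload = serializer({"sensor_id": "s-17", "reading": 21.4},
                     SerializationContext(topic, MessageField.VALUE))
producer.produce(topic, value=payload)
producer.flush()
```

A matching consumer would use AvroDeserializer against the same registry, which is what keeps schema evolution compatible across producers and consumers.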
QUALIFICATIONS - Implementation experience with AWS services - Hands-on experience leading large-scale global data warehousing and analytics projects. - Experience using some of the following: Apache Spark/Hadoop, Flume, Kinesis, Kafka, Oozie, Hue, Zookeeper, Ranger, Elasticsearch, Avro, Hive, Pig, Impala, Spark SQL, Presto, PostgreSQL, Amazon EMR, Amazon Redshift …
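As one hedged illustration of the Spark-plus-Avro/Parquet tooling listed above, a small PySpark job that reads Avro, aggregates with Spark SQL functions, and writes partitioned Parquet might look like this; the S3 paths, column names, and the spark-avro package version are assumptions.

```python
# Minimal sketch: read Avro files, aggregate with Spark SQL functions,
# and write partitioned Parquet. Paths and columns are illustrative;
# reading Avro requires the external spark-avro package
# (e.g. --packages org.apache.spark:spark-avro_2.12:3.5.0).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("avro-to-parquet").getOrCreate()

orders = spark.read.format("avro").load("s3://example-bucket/raw/orders/")

daily_totals = (
    orders
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("total_amount"))
)

daily_totals.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3://example-bucket/curated/daily_totals/")

spark.stop()
```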