…similar tools
- Leading on solution deployment using infrastructure-as-code and CI/CD practices
- Transforming diverse data formats including JSON, XML, CSV, and Parquet
- Creating and maintaining clear technical documentation, metadata, and data dictionaries
Your previous experience as Principal Data Engineer will include:
- Strong background across AWS data …
…cleanse data using a range of tools and techniques.
- Manage and process structured and semi-structured data formats such as JSON, XML, CSV, and Parquet.
- Operate effectively in Linux and cloud-based environments.
- Support CI/CD processes and adopt infrastructure-as-code principles.
- Contribute to a …
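Several of these roles involve converting records between semi-structured and tabular formats. As a minimal sketch of that kind of task, using only Python's standard library (the field names and sample records here are hypothetical, purely for illustration), a JSON-to-CSV conversion might look like:

```python
import csv
import io
import json

def json_to_csv(json_text: str) -> str:
    """Convert a JSON array of flat objects into CSV text."""
    records = json.loads(json_text)
    # Take the union of keys so ragged records still produce a full header.
    fieldnames = sorted({key for rec in records for key in rec})
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames, restval="")
    writer.writeheader()
    for rec in records:
        writer.writerow(rec)
    return buf.getvalue()

# Hypothetical sample input; a real pipeline would read from files or streams.
sample = '[{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]'
print(json_to_csv(sample))
```

Production pipelines would typically target columnar formats such as Parquet via a library like pyarrow rather than CSV, but the shape of the work (parse, normalise the schema, serialise) is the same.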
…role in scaling their innovative platform.
What You’ll Be Doing:
- Maintaining and evolving a mature, compute-heavy platform built in C#, DuckDB, and Parquet, hosted via Docker and Azure.
- Collaborating closely with software developers, data scientists, and product teams to deliver a powerful, real-time data engine.
- Contributing …
…multi-threading, concurrency, etc.
- Fluency in C++ and/or Java.
- Experience working with text or semi-structured data (e.g. JSON, XML, ORC, Avro, Parquet).
- BS in Computer Science or a related field; Master's or PhD preferred.
Snowflake is growing fast, and we're scaling our team …