… management and associated tools such as Git/Bitbucket. Experience in the use of CI/CD tools such as Jenkins, or an understanding of their role. Experience with Apache Spark or Hadoop. Experience in building data pipelines. Experience in designing warehouses, ETL pipelines, and data modelling. Good knowledge of designing, building, using, and maintaining REST APIs. Good …
… of a forward-thinking company where data is central to strategic decision-making. We’re looking for someone who brings hands-on experience in streaming data architectures, particularly with Apache Kafka and Confluent Cloud, and is eager to shape the future of scalable, real-time data pipelines. You’ll work closely with both the core Data Engineering team and the Data Science function, bridging the gap between model development and production-grade data infrastructure.

What You’ll Do:
- Design, build, and maintain real-time data streaming pipelines using Apache Kafka and Confluent Cloud.
- Architect and implement robust, scalable data ingestion frameworks for batch and streaming use cases.
- Collaborate with stakeholders to deliver high-quality, reliable datasets to live …

… experience in a Data Engineering or related role.
- Strong experience with streaming technologies such as Kafka, Kafka Streams, and/or Confluent Cloud (must-have).
- Solid knowledge of Apache Spark and Databricks.
- Proficiency in Python for data processing and automation.
- Familiarity with NoSQL technologies (e.g., MongoDB, Cassandra, or DynamoDB).
- Exposure to machine learning pipelines or close …