observability frameworks, including lineage tracking, SLAs, and data quality monitoring. Familiarity with modern data lake table formats such as Delta Lake, Iceberg, or Hudi. Background in stream processing (Kafka, Flink, or similar ecosystems). Exposure to containerisation and orchestration technologies such as Docker and Kubernetes.
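For context on the table formats named above, here is a minimal sketch of writing and time-travel reading a Delta Lake table with PySpark. It assumes the delta-spark package is installed; the /tmp/events_delta path, schema, and app name are illustrative assumptions, not details from the role.

```python
# Minimal Delta Lake round trip with PySpark (sketch; paths are hypothetical).
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

builder = (
    SparkSession.builder.appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Append a small batch to a Delta table (created on first write).
events = spark.createDataFrame(
    [(1, "click"), (2, "view")], ["user_id", "event_type"]
)
events.write.format("delta").mode("append").save("/tmp/events_delta")

# Delta's transaction log enables time travel: read the table as of version 0.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/tmp/events_delta")
v0.show()
```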
London, South East, England, United Kingdom Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
and observability stacks (lineage, data contracts, quality monitoring). Knowledge of data lake formats (Delta Lake, Parquet, Iceberg, Hudi). Familiarity with containerisation and streaming technologies (Docker, Kubernetes, Kafka, Flink). Exposure to lakehouse or medallion architectures within Databricks.
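As a hedged illustration of the quality-monitoring and medallion (bronze/silver) patterns named above, the PySpark sketch below promotes raw records to a cleaned silver Parquet layer behind a simple contract check. The paths, the payments schema, and the non-negative amount rule are assumptions for the example, not any specific platform's contract.

```python
# Medallion-style bronze -> silver step with a simple data quality gate (sketch).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw ingested records, kept as-is (hypothetical path).
bronze = spark.read.json("/lake/bronze/payments")

# Silver: deduplicated, type-conformant records.
silver = (
    bronze.dropDuplicates(["payment_id"])
          .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Quality gate: fail the job if any record violates the assumed contract.
violations = silver.filter(F.col("payment_id").isNull() | (F.col("amount") < 0))
n_bad = violations.count()
if n_bad > 0:
    raise ValueError(f"{n_bad} records violate the data contract")

silver.write.mode("overwrite").parquet("/lake/silver/payments")
```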
at scale. This is a hands-on engineering role that blends software craftsmanship with data architecture expertise. Key responsibilities: Design and implement high-throughput data streaming solutions using Kafka, Flink, or Confluent. Build and maintain scalable backend systems in Python or Scala, following clean code and testing principles. Develop tools and frameworks for data governance, privacy, and quality monitoring … data use cases. Contribute to an engineering culture that values testing, peer reviews, and automation-first principles. What You'll Bring: Strong experience in streaming technologies such as Kafka, Flink, or Confluent. Advanced proficiency in Python or Scala, with a solid grasp of software engineering fundamentals. Proven ability to design, deploy, and scale production-grade data platforms and backend systems.
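To make the streaming stack above concrete, here is a minimal produce/consume round trip with the confluent-kafka Python client. The broker address, topic name, and payload are assumptions for the sketch.

```python
# One JSON event produced to and consumed back from a local Kafka broker (sketch).
import json
from confluent_kafka import Consumer, Producer

BROKER, TOPIC = "localhost:9092", "orders"  # assumed broker and topic

# Produce one JSON-encoded event.
producer = Producer({"bootstrap.servers": BROKER})
producer.produce(TOPIC, value=json.dumps({"order_id": 1, "total": 9.99}))
producer.flush()  # block until the broker acknowledges delivery

# Consume it back from the beginning of the topic.
consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "demo-readers",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
msg = consumer.poll(10.0)  # wait up to 10s for a message
if msg is not None and msg.error() is None:
    print(json.loads(msg.value()))
consumer.close()
```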