to cross-functional teams, ensuring best practices in data architecture, security, and cloud computing. Proficiency in data modelling, ETL processes, data warehousing, distributed systems, and metadata systems. Utilise Apache Flink and other streaming technologies to build real-time data processing systems that handle large-scale, high-throughput data. Ensure all data solutions comply with industry standards and government regulations. … not limited to EC2, S3, RDS, Lambda, and Redshift. Experience with other cloud providers (e.g., Azure, GCP) is a plus. In-depth knowledge and hands-on experience with Apache Flink for real-time data processing. Proven experience in mentoring and managing teams, with a focus on developing talent and fostering a collaborative work environment. Strong ability to engage with …
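The "real-time data processing" work this listing describes typically centres on windowed aggregation over an event stream. Below is a minimal pure-Python sketch of the tumbling-window pattern that a Flink job would apply at scale; the function name and event shape are illustrative, not Flink APIs.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size_s):
    """Group (timestamp, key) events into fixed-size windows and count per key.

    A conceptual sketch of tumbling-window aggregation; a real Flink job
    would do this incrementally over an unbounded, keyed stream.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Align each event to the start of its tumbling window.
        window_start = ts - (ts % window_size_s)
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(0, "click"), (3, "click"), (5, "view"), (12, "click")]
print(tumbling_window_counts(events, 10))
# → {0: {'click': 2, 'view': 1}, 10: {'click': 1}}
```

In production the same logic would be expressed with Flink's keyed streams and window operators, which add event-time semantics, watermarks, and fault-tolerant state that this sketch omits.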
enterprise-scale Business Planning Software solutions. Your Impact: Design, build, and operate platform capabilities supporting batch, streaming, and AI-driven workloads. Develop resilient and scalable systems using Apache Kafka, Flink, Pulsar, and cloud-native technologies. Collaborate with AI/ML teams to deploy models and enable generative AI use cases. Implement integrations with data lakes and event stores to … 8+ years of hands-on experience in software engineering, especially in platform/backend systems. Expert-level skills in Java and strong proficiency in Python. Experience with Apache Kafka, Flink, and Pulsar for building distributed data pipelines. Familiarity with scalable data storage and data lake integrations. Proven ability to integrate AI/ML models and work with prompt-based …
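The Kafka and Pulsar experience asked for here rests on one core abstraction: an append-only log that consumers read by offset. The sketch below models that idea in plain Python so the semantics are visible; the class and method names are illustrative, not the Kafka or Pulsar client APIs.

```python
class TopicLog:
    """Append-only log with offset-based reads, sketching the topic model
    behind Kafka and Pulsar. Illustrative only; real brokers add
    partitioning, replication, and consumer-group coordination."""

    def __init__(self):
        self._records = []

    def produce(self, record):
        # Append and return the offset assigned to the new record.
        self._records.append(record)
        return len(self._records) - 1

    def consume(self, offset, max_records=10):
        # Read a batch starting at `offset`; return it with the next offset,
        # which the caller commits to resume later (at-least-once style).
        batch = self._records[offset : offset + max_records]
        return batch, offset + len(batch)

log = TopicLog()
for r in ("r0", "r1", "r2"):
    log.produce(r)
batch, next_offset = log.consume(0, max_records=2)
print(batch, next_offset)
# → ['r0', 'r1'] 2
```

Because consumers track their own offsets rather than the broker deleting delivered messages, the same log can feed many independent pipelines, which is what makes these systems suitable as the event stores the listing mentions.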
Middlesbrough, Yorkshire and the Humber, United Kingdom
Anaplan
at scale. This is a hands-on engineering role that blends software craftsmanship with data architecture expertise. Key responsibilities: Design and implement high-throughput data streaming solutions using Kafka, Flink, or Confluent. Build and maintain scalable backend systems in Python or Scala, following clean code and testing principles. Develop tools and frameworks for data governance, privacy, and quality monitoring … data use cases. Contribute to an engineering culture that values testing, peer reviews, and automation-first principles. What You'll Bring: Strong experience in streaming technologies such as Kafka, Flink, or Confluent. Advanced proficiency in Python or Scala, with a solid grasp of software engineering fundamentals. Proven ability to design, deploy, and scale production-grade data platforms and backend …
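The "tools and frameworks for data governance, privacy, and quality monitoring" responsibility usually starts with rule-based record validation. Here is a minimal Python sketch of that pattern; the check names and record shape are hypothetical, and a production framework would add scheduling, metrics, and alerting on top.

```python
def run_quality_checks(records, checks):
    """Apply named check functions to each record and collect failures.

    Returns a list of (record_index, check_name) pairs for every rule
    a record violates; an empty list means the batch passed.
    """
    failures = []
    for i, record in enumerate(records):
        for name, check in checks.items():
            if not check(record):
                failures.append((i, name))
    return failures

# Illustrative rules: records must carry an id and a non-negative amount.
checks = {
    "has_id": lambda r: r.get("id") is not None,
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
}
records = [{"id": 1, "amount": 5}, {"id": None, "amount": -2}]
print(run_quality_checks(records, checks))
# → [(1, 'has_id'), (1, 'amount_non_negative')]
```

Keeping each rule as a small named predicate makes the checks easy to unit-test and lets the failure report name exactly which rule a record broke, which matters when the results feed a governance dashboard.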