We are seeking an experienced Kafka Real-Time Architect to design and implement scalable, high-performance real-time data processing systems leveraging Apache Kafka. In this role, you will be responsible for architecting and managing Kafka clusters, ensuring system scalability and availability, and integrating Kafka with various data processing … approach to addressing business data needs and ensuring optimal system performance. Key Responsibilities: Design & Architecture: Architect and design scalable, real-time streaming systems using Apache Kafka, ensuring they are robust, highly available, and meet business requirements for data ingestion, processing, and real-time analytics. Kafka Cluster Management: Configure, deploy … and troubleshoot issues to maintain smooth operations. Integration & Data Processing: Integrate Kafka with key data processing tools and platforms, including Kafka Streams, Kafka Connect, Apache Spark Streaming, Apache Flink, Apache Beam, and Schema Registry. This integration will facilitate data stream processing, event-driven architectures, and …
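As a rough illustration of the event-driven integration this role describes, the sketch below shows a minimal Kafka producer and consumer in Python using the confluent-kafka client. The broker address, topic name and consumer group are illustrative assumptions, not details from the listing.

```python
# Minimal sketch, assuming the confluent-kafka Python client and a local broker.
# Broker address, topic and group id are illustrative placeholders.
from confluent_kafka import Producer, Consumer

BROKER = "localhost:9092"   # assumed broker address
TOPIC = "orders"            # hypothetical topic name

# Produce a single event.
producer = Producer({"bootstrap.servers": BROKER})
producer.produce(TOPIC, key="order-1", value=b'{"amount": 42.0}')
producer.flush()  # block until the message is delivered

# Consume events from the same topic.
consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "analytics-service",   # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
msg = consumer.poll(5.0)               # wait up to 5 seconds for a record
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```

In a production design the producer and consumer would live in separate services, with Schema Registry enforcing the message contract between them.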
Leeds, England, United Kingdom Hybrid / WFH Options
KPMG UK
for at least the past 5 years and being a UK national or dual UK national. Experience in prominent languages such as Python, Scala, Spark, SQL. Experience working with any database technologies from an application programming perspective - Oracle, MySQL, MongoDB etc. Experience with the design, build and maintenance …
Leeds, England, United Kingdom Hybrid / WFH Options
Fruition Group
and compliance. Collaborate with stakeholders and external partners. Skills & Experience: Strong experience with AWS data technologies (e.g., S3, Redshift, Lambda). Proficient in Python, Apache Spark, and SQL. Experience in data warehouse design and data migration projects. Cloud data platform development and deployment. Expertise across data warehouse and …
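As one illustration of the Python/Spark/S3 work described above, here is a minimal PySpark sketch of a landing-to-curated transform; the bucket names, paths and column names are assumptions for the example.

```python
# Minimal sketch of an S3-to-warehouse-staging transform with PySpark.
# Bucket names, paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-staging-load").getOrCreate()

# Read raw landing data from S3 (assumes the Hadoop S3A connector is configured).
raw = spark.read.json("s3a://example-landing-bucket/events/")

# Light cleansing before the warehouse load.
cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_date").isNotNull())
)

# Write partitioned Parquet that a Redshift COPY or Spectrum table could read.
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-curated-bucket/events/"
)
```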
Leeds, West Yorkshire, Yorkshire and the Humber, United Kingdom
Engie
data cycle. Proven experience working with AWS data technologies (S3, Redshift, Glue, Lambda, Lake Formation, CloudFormation), GitHub, CI/CD. Coding experience in Apache Spark, Iceberg, or Python (Pandas). Experience in change and release management. Experience in data warehouse design and data modeling. Experience managing data …
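For the Python (Pandas) side of the stack named above, a minimal sketch of a typical tabular preparation step might look like this; the file paths and column names are illustrative assumptions.

```python
# Minimal Pandas sketch of the kind of tabular preparation referenced above.
# File paths and column names are illustrative assumptions.
import pandas as pd

# Load a raw extract (s3:// paths also work if s3fs is installed).
orders = pd.read_csv("raw_orders.csv", parse_dates=["order_ts"])

# Basic cleansing and a simple daily aggregate for downstream loading.
orders = orders.dropna(subset=["order_id", "amount"]).drop_duplicates("order_id")
daily = (
    orders.assign(order_date=orders["order_ts"].dt.date)
          .groupby("order_date", as_index=False)["amount"]
          .sum()
)

daily.to_parquet("daily_orders.parquet", index=False)  # requires pyarrow or fastparquet
```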
Leeds, West Yorkshire, Yorkshire and the Humber, United Kingdom Hybrid / WFH Options
KPMG UK
analytics/architecture/security using native technologies of at least one cloud platform (AWS, Azure, GCP). Expertise in prominent languages such as Python, Scala, Spark, SQL. Experience working with any database technologies from an application programming perspective - Oracle, MySQL, MongoDB etc. Expertise in leading the design, build and …
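To illustrate what "database technologies from an application programming perspective" can mean in practice, here is a minimal MongoDB sketch using pymongo; the connection string, database and collection names are assumptions for the example.

```python
# Minimal sketch of application-level access to MongoDB using pymongo.
# Connection string, database and collection names are illustrative assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
db = client["customers_db"]                        # hypothetical database
accounts = db["accounts"]                          # hypothetical collection

# Insert a document and read it back.
accounts.insert_one({"account_id": "A-1001", "status": "active", "balance": 250.0})
print(accounts.find_one({"account_id": "A-1001"}))

# A simple update, the kind of operation an application layer typically wraps.
accounts.update_one({"account_id": "A-1001"}, {"$inc": {"balance": -50.0}})
```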
/structured data handling. Ability to work independently and collaboratively in cross-functional teams. Nice to have: Experience with big data tools such as Spark, Hadoop, or MapReduce. Familiarity with data visualisation tools like QuickSight, Tableau, or Looker. Exposure to microservice APIs and public cloud ecosystems beyond AWS. AWS …
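As a brief illustration of the MapReduce pattern named in the nice-to-have list, the classic word count expressed in PySpark looks like the sketch below; the input path is an illustrative assumption.

```python
# Minimal sketch of the MapReduce-style word count expressed in PySpark.
# The input path is an illustrative assumption.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("word-count").getOrCreate()
sc = spark.sparkContext

lines = sc.textFile("logs/*.txt")                # hypothetical input files
counts = (
    lines.flatMap(lambda line: line.split())     # map: split lines into words
         .map(lambda word: (word, 1))            # map: emit (word, 1) pairs
         .reduceByKey(lambda a, b: a + b)        # reduce: sum counts per word
)
for word, count in counts.take(10):
    print(word, count)
```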
Principal Data Engineer, Consulting
Leeds Based
You must be eligible for SC Clearance
Role Overview
The Principal Data Engineer will be responsible for designing and implementing cloud-based data solutions using a range of AWS services. This role involves working …
to join its innovative team. This role requires hands-on experience with machine learning techniques and proficiency in data manipulation libraries such as Pandas, Spark, and SQL. As a Data Scientist at PwC, you will work on cutting-edge projects, using data to drive strategic insights and business decisions. … (e.g. scikit-learn) and deep learning frameworks such as PyTorch and TensorFlow. Understanding of machine learning techniques. Experience with data manipulation libraries (e.g. Pandas, Spark, SQL). Git for version control. Cloud experience (we use Azure/GCP/AWS). Skills we'd also like to hear about …
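A minimal sketch of the kind of Pandas plus scikit-learn workflow this role describes is shown below; the dataset, feature names and target column are illustrative assumptions.

```python
# Minimal sketch of a Pandas + scikit-learn workflow of the kind described above.
# Dataset, feature names and target column are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")                      # hypothetical dataset
X = df[["age", "tenure_months", "monthly_spend"]]      # assumed feature columns
y = df["churned"]                                      # assumed binary target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```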
maintaining Kafka clusters to ensure high availability and scalability. Integrating Kafka with other data processing tools and platforms such as Kafka Streams, Kafka Connect, Spark Streaming, Schema Registry, Flink and Beam. Collaborating with cross-functional teams to understand data requirements and design solutions that meet business needs. Implementing security … guidance and support to development and operations teams. Staying updated with the latest Kafka features, updates and industry practices. Required Skills & Experience: Extensive experience with Apache Kafka and real-time architecture, including event-driven frameworks. Strong knowledge of Kafka Streams, Kafka Connect, Spark Streaming, Schema Registry, Flink and Beam. …
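To make the Kafka-to-Spark Streaming integration above concrete, here is a minimal Spark Structured Streaming sketch that reads from a Kafka topic; it assumes the spark-sql-kafka connector is available, and the broker and topic names are placeholders.

```python
# Minimal sketch of Kafka-to-Spark Structured Streaming integration.
# Assumes the spark-sql-kafka connector package is on the classpath;
# broker and topic names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")
         .option("subscribe", "events")          # hypothetical topic
         .option("startingOffsets", "latest")
         .load()
)

# Kafka delivers key/value as binary; cast the value to string before processing.
decoded = events.select(F.col("value").cast("string").alias("payload"))

query = (
    decoded.writeStream.outputMode("append")
           .format("console")                    # a real sink would replace the console
           .start()
)
query.awaitTermination()
```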