- Expert knowledge of SQL and/or NoSQL database technologies
- Expert knowledge of various messaging protocols and technologies such as REST, HTTP/S, AMQP, and WebSocket
- Expert knowledge of Confluent Kafka
- Experience with, and a good understanding of, core technologies provided by GCP/AWS, such as S3, FSx, EKS, SQS, SNS, Kinesis, Amazon MQ, DynamoDB, GKE, Cloud Storage, Pub/Sub, and Filestore
- Knowledge of modern …
… our systems are trusted, reliable and available. The technology underpinning these capabilities includes industry-leading data and analytics products such as Snowflake, Tableau, DBT, Talend, Collibra, Kafka/Confluent, Astronomer/Airflow, and Kubernetes. This forms part of a longer-term strategic direction to implement Data Mesh, and with it establish shared platforms that enable a connected …
Salisbury, Wiltshire, South West, United Kingdom Hybrid / WFH Options
Anson Mccade
… directly with senior stakeholders to influence architectural decisions and standards.

You'll bring:
- Proven experience as a Lead Data Architect or Senior Data Solutions Architect
- Expertise in Kafka/Confluent, Databricks, Unity Catalog, and modern data lake/lakehouse architectures
- Strong grasp of cloud data platforms (AWS, Azure, GCP, Snowflake)
- Understanding of Data Mesh, Data Fabric, and data product-centric …
… forward-thinking company where data is central to strategic decision-making. We’re looking for someone who brings hands-on experience in streaming data architectures, particularly with Apache Kafka and Confluent Cloud, and is eager to shape the future of scalable, real-time data pipelines. You’ll work closely with both the core Data Engineering team and the Data Science function … bridging the gap between model development and production-grade data infrastructure.

What You’ll Do:
- Design, build, and maintain real-time data streaming pipelines using Apache Kafka and Confluent Cloud (a rough producer sketch follows this listing).
- Architect and implement robust, scalable data ingestion frameworks for batch and streaming use cases.
- Collaborate with stakeholders to deliver high-quality, reliable datasets to live analytical platforms and machine learning …

What We’re Looking For:
- 5 years of experience in a Data Engineering or related role.
- Strong experience with streaming technologies such as Kafka, Kafka Streams, and/or Confluent Cloud (must-have).
- Solid knowledge of Apache Spark and Databricks.
- Proficiency in Python for data processing and automation.
- Familiarity with NoSQL technologies (e.g., MongoDB, Cassandra, or DynamoDB).
- Exposure …
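For readers unfamiliar with what "real-time data streaming pipelines using Apache Kafka and Confluent Cloud" means in practice, below is a minimal, generic sketch of a Python producer using Confluent's confluent-kafka client. It is not taken from any of these listings: the cluster address, API key placeholders, and the events.raw topic name are all hypothetical.

```python
import json
from confluent_kafka import Producer

# Hypothetical Confluent Cloud connection settings -- the cluster address,
# API key and secret below are placeholders, not values from these listings.
conf = {
    "bootstrap.servers": "pkc-xxxxx.europe-west2.gcp.confluent.cloud:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<CLUSTER_API_KEY>",
    "sasl.password": "<CLUSTER_API_SECRET>",
}

producer = Producer(conf)

def delivery_report(err, msg):
    # Called asynchronously once per message to confirm delivery or report failure.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [{msg.partition()}] at offset {msg.offset()}")

# Produce a single JSON event to a hypothetical 'events.raw' topic,
# keyed by user ID so related events land on the same partition.
event = {"user_id": "u-123", "action": "page_view"}
producer.produce(
    "events.raw",
    key=event["user_id"].encode("utf-8"),
    value=json.dumps(event).encode("utf-8"),
    callback=delivery_report,
)

# Serve delivery callbacks and block until all queued messages are sent.
producer.poll(0)
producer.flush()
```

In a production pipeline, a call like this would normally sit inside a batch or stream-processing job (for example Spark Structured Streaming on Databricks, which the listing also mentions) rather than being invoked ad hoc.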