of our client's data platform. This role is ideal for someone who thrives on building scalable data solutions and is confident working with modern tools such as Azure Databricks, Apache Kafka, and Spark. In this role, you'll play a key part in designing, delivering, and optimising data pipelines and architectures. Your focus will be on enabling … and want to make a meaningful impact in a collaborative, fast-paced environment, we want to hear from you!

Role and Responsibilities
- Designing and building scalable data pipelines using Apache Spark in Azure Databricks
- Developing real-time and batch data ingestion workflows, ideally using Apache Kafka
- Collaborating with data scientists, analysts, and business stakeholders to build high …

and Experience
We're seeking candidates who bring strong technical skills and a hands-on approach to modern data engineering. You should have:
- Proven experience with Azure Databricks and Apache Spark
- Working knowledge of Apache Kafka and real-time data streaming
- Strong proficiency in SQL and Python
- Familiarity with Azure Data Services and CI/CD pipelines
Portsmouth, Hampshire, England, United Kingdom Hybrid / WFH Options
Computappoint
/Starburst Enterprise/Galaxy administration and CLI operations
Container Orchestration: Proven track record with Kubernetes/OpenShift in production environments
Big Data Ecosystem: Strong background in Hadoop, Hive, Spark, and cloud platforms (AWS/Azure/GCP)
Systems Architecture: Understanding of distributed systems, high availability, and fault-tolerant design
Security Protocols: Experience with LDAP, Active Directory, OAuth2, and
architecture, integration, governance frameworks, and privacy-enhancing technologies
- Experience with databases (SQL & NoSQL: Oracle, PostgreSQL, MongoDB), data warehousing, and ETL/ELT tools
- Familiarity with big data technologies (Hadoop, Spark, Kafka), cloud platforms (AWS, Azure, GCP), and API integrations
- Desirable: data certifications (TOGAF, DAMA), government/foundational data experience, cloud-native platforms knowledge, AI/ML data requirements understanding
modern data platforms (Databricks, Snowflake, Kafka), container orchestration (Kubernetes/OpenShift), and multi-cloud deployments across AWS, Azure, GCP
- Advanced knowledge of Big Data ecosystems (Hadoop/Hive/Spark), data lakehouse architectures, mesh topologies, and real-time streaming platforms
- Strong Unix/Linux skills, database connectivity (JDBC/ODBC), authentication systems (LDAP, Active Directory, OAuth2), and data integration