technology. Requirements include at least 7 years of professional software development experience, with a focus on backend engineering. Our backend tech stack currently includes: Go, Protobuf, gRPC, PostgreSQL, Redis, Kafka, and Docker, with full CI and automated deployments using Kubernetes and Ansible to multiple cloud providers. We are committed to using the best technology for the task at hand. Other technologies …
scaled agile) processes. Data-integration-focused data pipeline orchestration and ELT tooling such as Apache Airflow, Spark, NiFi, Airbyte and Singer. Message brokers and streaming data processors, such as Apache Kafka. Object storage, such as S3, MinIO, LakeFS. CI/CD pipeline integration, ideally Azure DevOps. Python scripting. API management solutions. Automation. Key skills: Experience in the design/configuration …
Southampton, Hampshire, England, United Kingdom Hybrid / WFH Options
Spectrum IT Recruitment
Proven problem-solving ability and a track record of meeting deadlines. Excellent communication skills for cross-team collaboration. Desirable skills (not essential, but a big plus): SSAS, SSRS, SSIS; Kafka, MSK, Snowflake, Aurora DB, SNS; AWS or Azure database management. If you're ready to join a company that challenges limits, delivers excellence, and offers a truly rewarding career …
areas. Qualifications and Requirements: 5+ years in senior data architecture roles with expertise in distributed systems, high availability, and enterprise leadership experience; proficiency in modern data platforms (Databricks, Snowflake, Kafka), container orchestration (Kubernetes/OpenShift), and multi-cloud deployments across AWS, Azure, GCP; advanced knowledge of Big Data ecosystems (Hadoop/Hive/Spark), data lakehouse architectures, mesh topologies …
integration, governance frameworks, and privacy-enhancing technologies. Experience with databases (SQL & NoSQL: Oracle, PostgreSQL, MongoDB), data warehousing, and ETL/ELT tools. Familiarity with big data technologies (Hadoop, Spark, Kafka), cloud platforms (AWS, Azure, GCP), and API integrations. Desirable: data certifications (TOGAF, DAMA), government/foundational data experience, cloud-native platforms knowledge, AI/ML data requirements understanding, data …
a data platform. Strong ETL/ELT engineering skills. Desirable: Experience with Python and related tooling; understanding of MLOps practices (MLflow, Azure ML); familiarity with real-time data technologies (Kafka, Delta Live Tables). If you're passionate about transforming the banking industry and eager to leverage your expertise to drive continuous improvement and innovation for clients then click "APPLY …
Fareham, Hampshire, South East, United Kingdom Hybrid / WFH Options
Richmond Square Consulting Limited
side and remotely). EUD provisioning, maintenance, patching and hardening. SSO concepts and operation. Understanding of working in protectively marked environments. Any experience with DevOps, Docker, Kubernetes, Apache (NiFi, Kafka) would be a nice-to-have, but non-essential. Current and active SC Clearance (willingness to undergo DV at some point may be advantageous). If you feel that you …
Monitoring and Log Analytics - Mentoring/Leading/Management experience, initially with a small team but with a view that this will grow - Ideally skills in Delta Live Tables, Kafka, Azure Stream Analytics, Azure ML, Power BI and Financial Modelling experience.
our client's data platform. This role is ideal for someone who thrives on building scalable data solutions and is confident working with modern tools such as Azure Databricks, Apache Kafka, and Spark. In this role, you'll play a key part in designing, delivering, and optimising data pipelines and architectures. Your focus will be on enabling robust data … hear from you! Role and Responsibilities: Designing and building scalable data pipelines using Apache Spark in Azure Databricks; developing real-time and batch data ingestion workflows, ideally using Apache Kafka; collaborating with data scientists, analysts, and business stakeholders to build high-quality data products; supporting the deployment and productionisation of machine learning pipelines; contributing to the ongoing development of … who bring strong technical skills and a hands-on approach to modern data engineering. You should have: Proven experience with Azure Databricks and Apache Spark; working knowledge of Apache Kafka and real-time data streaming; strong proficiency in SQL and Python; familiarity with Azure Data Services and CI/CD pipelines in a DevOps environment; solid understanding of data …