technology. Requirements include at least 7 years of professional software development experience, with a focus on backend engineering. Our backend tech stack currently includes: Go, Protobuf, gRPC, PostgreSQL, Redis, Kafka, and Docker, with full CI and automated deployments using Kubernetes and Ansible to multiple cloud providers. We are committed to using the best technology for the task at hand. Other technologies …
scaled agile) processes
- Data-integration-focused data pipeline orchestration and ELT tooling, such as Apache Airflow, Spark, NiFi, Airbyte, and Singer (a minimal orchestration sketch follows this list)
- Message brokers and streaming data processors, such as Apache Kafka
- Object storage, such as S3, MinIO, LakeFS
- CI/CD pipeline integration, ideally Azure DevOps
- Python scripting
- API management solutions
- Automation

Key Skills
- Experience in the design/configuration …
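As an illustration of the pipeline-orchestration tooling named above, here is a minimal Apache Airflow DAG sketch. It assumes Airflow 2.4+; the DAG id, schedule, and the extract/load callables are hypothetical placeholders for illustration, not anything taken from the listing.

```python
# Minimal sketch of an Airflow ELT DAG; dag_id, schedule, and the
# extract/load steps are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull raw records from a source system.
    return [{"id": 1, "value": "example"}]


def load(**context):
    # Placeholder: read the upstream result via XCom and load it
    # into object storage or a warehouse table.
    records = context["ti"].xcom_pull(task_ids="extract")
    print(f"loading {len(records)} records")


with DAG(
    dag_id="example_elt_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # requires Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run extract before load.
    extract_task >> load_task
```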
a data platform
- Strong ETL/ELT engineering skills

Desirable
- Experience with Python and related tooling
- Understanding of MLOps practices (MLflow, Azure ML) — see the tracking sketch after this list
- Familiarity with real-time data technologies (Kafka, Delta Live Tables)

If you're passionate about transforming the banking industry and eager to leverage your expertise to drive continuous improvement and innovation for clients, then click "APPLY" …
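To illustrate the MLflow side of the MLOps practices mentioned above, here is a minimal experiment-tracking sketch; the experiment name, parameters, and metric value are hypothetical placeholders, and a real training job would also log the fitted model itself.

```python
# Minimal sketch of MLflow experiment tracking; names and values
# are hypothetical placeholders.
import mlflow

mlflow.set_experiment("churn-model-demo")  # hypothetical experiment name

with mlflow.start_run():
    # Record the hyperparameters a training job would use.
    mlflow.log_param("max_depth", 6)
    mlflow.log_param("learning_rate", 0.1)

    # ... train and evaluate a model here ...

    # Record the resulting evaluation metric.
    mlflow.log_metric("auc", 0.87)
```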
our client's data platform. This role is ideal for someone who thrives on building scalable data solutions and is confident working with modern tools such as Azure Databricks, Apache Kafka, and Spark. In this role, you'll play a key part in designing, delivering, and optimising data pipelines and architectures. Your focus will be on enabling robust data … hear from you!

Role and Responsibilities
- Designing and building scalable data pipelines using Apache Spark in Azure Databricks
- Developing real-time and batch data ingestion workflows, ideally using Apache Kafka (a minimal sketch follows at the end of this listing)
- Collaborating with data scientists, analysts, and business stakeholders to build high-quality data products
- Supporting the deployment and productionisation of machine learning pipelines
- Contributing to the ongoing development of …

… who bring strong technical skills and a hands-on approach to modern data engineering. You should have:
- Proven experience with Azure Databricks and Apache Spark
- Working knowledge of Apache Kafka and real-time data streaming
- Strong proficiency in SQL and Python
- Familiarity with Azure Data Services and CI/CD pipelines in a DevOps environment
- Solid understanding of data …
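For a flavour of the ingestion work described above, here is a minimal PySpark Structured Streaming sketch that reads JSON events from Kafka and appends them to a Delta table, as one might run on Azure Databricks. The broker address, topic name, event schema, and storage paths are all hypothetical placeholders.

```python
# Minimal sketch of a streaming ingestion job: read from Kafka,
# parse JSON payloads, append to a Delta table. Broker, topic,
# schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

# Assumed payload schema for the incoming JSON events.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_time", TimestampType()),
    StructField("payload", StringType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

# Kafka values arrive as bytes; cast to string, parse JSON, flatten.
parsed = raw.select(
    from_json(col("value").cast("string"), schema).alias("event")
).select("event.*")

query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # hypothetical path
    .outputMode("append")
    .start("/tmp/tables/events")  # hypothetical Delta table path
)
query.awaitTermination()
```

The checkpoint location is what lets the stream restart from where it left off; in practice it would live on durable cloud storage rather than a local path.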