matter expertise in data processing and reporting. In this role, you will own the reliability, performance, and operational excellence of our real-time and batch data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll act as the first line of defense for data-related incidents, rapidly diagnose root causes, and implement resilient solutions that keep critical … as on-call escalation for data pipeline incidents, including real-time stream failures and batch job errors. Rapidly analyze logs, metrics, and trace data to pinpoint failure points across the AWS, Flink, Kafka, and Python layers. Lead post-incident reviews: identify root causes, document findings, and drive corrective actions to closure.
Reliability & Monitoring
Design, implement, and maintain robust observability for … capacity planning, scaling policies, and disaster-recovery drills for stream and batch environments.
Architecture & Automation
Collaborate with data engineering and product teams to architect scalable, fault-tolerant pipelines using AWS services (e.g. Step Functions, EMR, Lambda, Redshift) integrated with Apache Flink and Kafka. Troubleshoot and maintain Python-based applications. Harden CI/CD for data jobs: implement automated …
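To make the on-call triage described above concrete, here is a minimal sketch of a Kafka consumer-lag probe of the kind such a role might run when a stream stalls. It assumes the kafka-python client; the broker address, group, and topic names are placeholders, not details from the listing.

```python
# Minimal consumer-lag probe (kafka-python): compares each partition's
# latest offset against the group's committed offset.
from kafka import KafkaConsumer, TopicPartition

def consumer_lag(bootstrap: str, group: str, topic: str) -> dict:
    consumer = KafkaConsumer(bootstrap_servers=bootstrap, group_id=group,
                             enable_auto_commit=False)
    partitions = [TopicPartition(topic, p)
                  for p in consumer.partitions_for_topic(topic) or []]
    end_offsets = consumer.end_offsets(partitions)   # latest offset per partition
    lag = {}
    for tp in partitions:
        committed = consumer.committed(tp)           # last committed offset, or None
        lag[tp.partition] = end_offsets[tp] - (committed or 0)
    consumer.close()
    return lag

if __name__ == "__main__":
    # Hypothetical broker/group/topic values, for illustration only.
    print(consumer_lag("localhost:9092", "pipeline-consumers", "events"))
```

Lag that keeps growing across successive checks usually points at a stuck or under-provisioned consumer (for example, a struggling Flink job) rather than at the brokers themselves.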
includes defining data flows, modelling entity relationships, and contributing to high-level design documentation. You must bring experience working with government or public sector organisations, a deep understanding of AWS data services, and the ability to communicate clearly with both technical and non-technical stakeholders.
Key Responsibilities
* Collaborate with Technical Architects (TAs) to define and document high-level architectural … data designs.
* Create and maintain end-to-end data flow diagrams, logical data models, and entity relationship diagrams (ERDs).
* Design scalable, secure, and robust data architectures on AWS, incorporating services such as S3, Glue, Redshift, RDS, and Lambda.
* Work with stakeholders to understand business and data requirements and translate them into architectural blueprints.
* Ensure data architecture complies with … providing input into architecture governance and assurance processes.
Skills and Experience
* Experience working with UK Government departments or public sector bodies.
* Strong experience designing and implementing data architectures on AWS platforms.
* Ability to create and present entity relationship diagrams, logical data models, and high-level architecture diagrams.
* Familiarity with data modelling standards, metadata management, and data governance.
* Knowledge of …
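As an illustration of the S3 / Glue / Redshift-style stack named in the responsibilities, the boto3 sketch below runs an Athena query against a table registered in the Glue Data Catalog (data sitting in S3 is queried in place). The database, table, bucket, and region are hypothetical stand-ins, not details from the role.

```python
# Sketch: query S3-resident data through the Glue Data Catalog with Athena.
import boto3

athena = boto3.client("athena", region_name="eu-west-2")  # London region assumed

response = athena.start_query_execution(
    QueryString="SELECT case_type, COUNT(*) FROM cases GROUP BY case_type",
    QueryExecutionContext={"Database": "public_sector_lake"},   # assumed Glue DB name
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])  # poll get_query_execution with this ID for status
```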
and stakeholders to implement big data solutions that provide actionable insights to the business. You will participate in the design and development of services using a wide range of AWS technologies (e.g. EMR, Lambda, ECS, Quicksight, Neptune). You will stay abreast of emerging technologies, their constraints and strengths, and understand how to couple and when to use …
related fields
Advanced analytical framework and experience relating data insights to business problems and creating appropriate dashboards
High proficiency in ETL, SQL, and database management is mandatory
Experience with AWS services such as Glue, Athena, Redshift, Lambda, and S3
Python programming experience using data libraries such as pandas and numpy
Interest in machine learning, logistic regression, and emerging solutions for …
Preferred but not mandatory:
Experience in a startup or fintech will be considered a great advantage
Awareness of or hands-on experience with ML/AI implementation or MLOps
AWS foundational certification
Opportunities to Take Ownership - Work on high-impact projects with real autonomy.
Fast Career Growth - Gain exposure to multiple business areas and advance quickly.
Be at the Forefront …
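By way of illustration of the pandas/numpy work the profile asks for, here is a small transform: load a raw extract, coerce a dirty numeric column, and roll it up into a daily reporting table. The file and column names are invented for the example.

```python
# Sketch of a pandas/numpy reporting transform over a hypothetical extract.
import numpy as np
import pandas as pd

raw = pd.read_csv("transactions.csv", parse_dates=["created_at"])  # hypothetical file
raw["amount"] = pd.to_numeric(raw["amount"], errors="coerce")      # bad rows become NaN
clean = raw.dropna(subset=["amount"])

daily = (clean
         .assign(day=clean["created_at"].dt.date,
                 log_amount=np.log1p(clean["amount"]))   # numpy feature for later modelling
         .groupby("day")
         .agg(total=("amount", "sum"), transactions=("amount", "count")))
print(daily.head())
```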
AI workflows.
Key Responsibilities
Architect and build scalable data pipelines for ingestion, transformation, enrichment, and publication.
Integrate AI/ML workflows, including training data pipelines and inference outputs.
Lead AWS-based data engineering using services like S3, Lambda, Glue, and Athena.
Ensure interoperability with global research infrastructures (e.g. DiSSCo UK, RECODE, DToL).
Define best practices in data …
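For a flavour of the ingestion-to-publication step, here is a minimal sketch of an S3-triggered AWS Lambda handler that reads a newly landed record, applies a placeholder enrichment, and writes the result under a "published" prefix. The bucket layout and the enrichment itself are assumptions for illustration, not the project's actual pipeline design.

```python
# Sketch: S3-event-driven Lambda ingest step (ingestion -> enrichment -> publication).
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:                # one entry per S3 object event
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]        # note: may be URL-encoded in real events
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        specimen = json.loads(body)
        specimen["ingested"] = True                # stand-in for real enrichment logic
        s3.put_object(Bucket=bucket,
                      Key=f"published/{key}",
                      Body=json.dumps(specimen).encode())
```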