City of London, London, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment Limited
become a trusted partner across a wide range of businesses. In this role you'll take ownership of the reliability and performance of large-scale data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll play a key role in diagnosing incidents, optimising system behaviour, and ensuring reporting data is delivered on time and without failure. The … ideal candidate will have strong experience working with streaming and batch data systems, a solid understanding of monitoring and observability, and hands-on experience working with AWS, Apache Flink, Kafka, and Python. This is a fantastic opportunity to step into an SRE role focused on data reliability in a modern cloud-native environment, with full ownership of incident … management, architecture, and performance. The Role: *Maintain and monitor real-time and batch data pipelines using Flink, Kafka, Python, and AWS *Act as an escalation point for critical data incidents and lead root cause analysis *Optimise system performance, define SLIs/SLOs, and drive reliability *Work closely with other departments and teams to architect scalable, fault-tolerant data …
Key Responsibilities Lead the design, development, and deployment of scalable full stack applications using React and AWS services. Collaborate with cross-functional teams including Product, Architecture, and Test to deliver high-quality software. Champion best practices in software engineering, including CI/CD, automated testing, and code reviews. Mentor junior engineers and contribute to a culture of continuous improvement. … Participate in sprint planning, technical design sessions, and architectural reviews. Required Skills & Experience Strong proficiency in React. Deep experience with AWS services such as Lambda, API Gateway, DynamoDB, S3, and CloudFormation. Solid understanding of TypeScript, Node.js, and RESTful API design. Familiarity with DevOps practices and tools (e.g., GitHub Actions, Terraform, CloudWatch). Experience working in Agile environments and … contributing to sprint ceremonies. Ability to challenge existing approaches and drive innovation within the team. Desirable Exposure to AWS AI services (e.g., Lex, Bedrock). Experience with serverless architectures and event-driven design patterns. Familiarity with containerization (Docker, ECS) and observability tooling. Team Fit A proactive mindset with a passion for mentoring and uplifting team performance. Strong communication skills …
london (city of london), south east england, united kingdom
Halian Technology Limited
experience Proficiency in Node.js. Good understanding of software engineering principles, REST APIs, and asynchronous programming Experience with modern frontend development, ideally with React Nice to Have: Experience with NestJS Cloud experience with AWS services (Lambda, API Gateway, S3, etc.) Exposure to payments or fintech environments This role is 3 days on site in central London and offers a quick 2-3 stage interview process …
deploy LangGraph-based agentic systems that orchestrate LLM-driven tools and workflows. · Build and integrate modular AI agents capable of real-world task execution in cloud-native environments. · Utilize AWS services such as Lambda, Step Functions, Bedrock, S3, ECS/Fargate, DynamoDB, and API Gateway to support scalable, serverless infrastructure. · Write production-grade Python code, following best practices … or LLM-centric development. · Proven track record with LangGraph, LangChain, or similar orchestration frameworks. · Expert in Python (asyncio, FastAPI preferred). · Hands-on experience building and deploying applications on AWS, particularly using Lambda, Fargate, S3, Step Functions, and DynamoDB. · Familiarity with AWS Bedrock is a plus. · Strong understanding of agentic patterns, prompt chaining, tool calling, and memory …
london (city of london), south east england, united kingdom
HCLTech
Media, Retail and CPG, and Public Services. Consolidated revenues as of the 12 months ending December 2024 totaled $13.8 billion. Job Summary: We are seeking a highly skilled and experienced AWS Lead Data Engineer who will build and lead the development of scalable data pipelines and platforms on AWS. The ideal candidate will have deep expertise in PySpark, Glue, Athena … AWS Lake Formation, data modelling, DBT, Airflow, and Docker, and will be responsible for driving best practices in data engineering, governance, and DevOps. Key Responsibilities: • Lead the design and implementation of scalable, secure, and high-performance data pipelines using PySpark and AWS Glue. • Architect and manage data lakes using AWS Lake Formation, ensuring proper access control and data governance. … and reporting. • Collaborate with analysts and business stakeholders to understand data requirements and deliver robust solutions. • Implement and maintain CI/CD pipelines for data workflows using tools like AWS CodePipeline, Git, and GitHub Actions. • Ensure data quality, lineage, and observability. • Mentor junior engineers and establish coding and design standards across the team. • Monitor and optimize performance of data pipelines …
london (city of london), south east england, united kingdom
rmg digital
Bitbucket/GitHub, SonarQube, CAST, TeamCity/Jenkins/Azure DevOps Expert-level knowledge of telemetry and observability platforms such as the ELK stack, Grafana, Kibana, Azure Application Insights, AWS CloudWatch, etc. Scripting languages, preferably Python and PowerShell Database technologies, preferably MS SQL Server and PostgreSQL Infrastructure as code – AWS CloudFormation/Terraform/Ansible/Chef AWS cloud-native development using EC2, Lambda, S3, Simple Queue Service, etc. …
london (city of london), south east england, united kingdom
Scrumconnect Consulting
million UK citizens. With a strong commitment to user-centred design and agile delivery, and more, to deliver innovative digital services that matter Preferred Tech Stack Expertise Cloud Infrastructure: AWS (EKS, RDS, Aurora, ElastiCache, Kafka, IAM) Secure Hosting: Experience working with air-gapped or government-secure environments Container & Cluster Management: Docker, Kubernetes, Rancher, Jenkins, Helm Monitoring & Observability: Prometheus, Grafana … Jenkins, Git, ServiceNow, Trivy, Terraform Streaming & Messaging: Apache Kafka (including Kafka Replication) Data Layers: PostgreSQL, Redis, RDLs Automation: IaC, pipeline build automation, event relay tooling Scripting: Bash, Python, Groovy, Lambda functions Responsibilities Run, manage, and continuously evolve the AWS and secure on-premises environments to ensure availability. Lead Level 3 (L3) production support and non-production environment maintenance …
City of London, London, United Kingdom Hybrid / WFH Options
Publicis Production
play a pivotal role in designing scalable data pipelines, optimising data workflows, and ensuring data availability and quality for production technology. The ideal candidate brings deep technical expertise in AWS, GCP and/or Databricks alongside essential hands-on experience building pipelines in Python, analysing data requirements with SQL, and modern data engineering practices. Your ability to work across … ability to progress with design, build and validate output data independently. Deep proficiency in Python (including PySpark), SQL, and cloud-based data engineering tools. Expertise in multiple cloud platforms (AWS, GCP, or Azure) and managing cloud-based data infrastructure. Strong background in database technologies (SQL Server, Redshift, PostgreSQL, Oracle). Desirable Skills: Familiarity with machine learning pipelines and MLOps … practices. Additional experience with Databricks and specific AWS services such as Glue, S3, and Lambda Proficient in Git, CI/CD pipelines, and DevOps tools (e.g., Azure DevOps) Hands-on experience with web scraping, REST API integrations, and streaming data pipelines. Knowledge of JavaScript and front-end frameworks (e.g., React) Key Responsibilities: Architect and maintain robust data pipelines (batch and …