Responsibilities Design and prototype AI-powered features using tools like OpenAI, Anthropic, and Google Vertex AI. Develop full-stack solutions using Python and TypeScript in a Next.js/AWS Amplify stack. Implement scalable, event-driven workflows with AWS Step Functions and Lambda. Integrate and normalize financial data across formats including JSON, XML, and CSV … or open-source projects demonstrating your curiosity and skills. Comfortable balancing innovation with compliance, privacy, and security best practices. Technology Stack Languages: Python, TypeScript Frameworks: Next.js, LangChain Cloud & Infrastructure: AWS Amplify, AWS Lambda, AWS Step Functions AI Platforms: OpenAI, Anthropic, Google (Gemini/Vertex AI) Data Formats: JSON, XML, CSV Dev Tools: Cursor, Windsurf, Loom
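The data-normalization responsibility above can be sketched in a few lines of Python. This is a minimal, illustrative example: the field names (`symbol`, `amount`) are hypothetical, not a real schema from the posting.

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

def normalize_record(payload: str, fmt: str) -> dict:
    """Parse one transaction record from JSON, XML, or CSV into a common shape.

    Field names are illustrative placeholders, not a real financial schema.
    """
    if fmt == "json":
        data = json.loads(payload)
        return {"symbol": data["symbol"], "amount": float(data["amount"])}
    if fmt == "xml":
        root = ET.fromstring(payload)
        return {"symbol": root.findtext("symbol"),
                "amount": float(root.findtext("amount"))}
    if fmt == "csv":
        # Treat the payload as a one-row CSV with a header line.
        row = next(csv.DictReader(io.StringIO(payload)))
        return {"symbol": row["symbol"], "amount": float(row["amount"])}
    raise ValueError(f"unsupported format: {fmt}")
```

In practice each format-specific branch would live behind a Lambda or Step Functions task, but the normalization contract stays the same: every source converges on one typed record shape.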
Senior Cloud Data Engineer (AWS), Flutter Functions Locations: London, UK; Cluj-Napoca, Romania; Porto, Portugal. Time type: Full time. Posted: 30+ Days Ago. Job requisition id: JR127551. This role is … Please note: We are unable to provide visa sponsorship for this position. Candidates must have the right to work in the UK (or applicable country) without sponsorship. About Flutter Functions The Flutter Functions division is a key component of Flutter Entertainment, responsible for providing essential support and services across the organization. The division encompasses various corporate functions … finance, legal, human resources, technology, and more, ensuring seamless operations and strategic alignment throughout the company. Flutter consists of two commercial divisions (FanDuel and International) and our central Flutter Functions: COO, Finance & Legal. Here in Flutter Functions we work with colleagues across all our divisions and regions to deliver something we call the Flutter Edge. It's our
part of Theodo Group, we collaborate with a network of 10 companies globally, focusing on varied sectors! Role Overview: As a Cloud Engineer, you'll build innovative products using AWS Serverless, working closely with developers and architects. You'll be hands-on coding while also engaging in client-facing discussions, bridging technical and business conversations. Our core stack is … AWS Serverless & TypeScript, but you'll gain exposure to different technologies and ways of working. You'll be part of a team of top Cloud Engineers, with plenty of opportunities for skill development and growth. We focus on delivering high-quality work that drives long-term improvements for our clients. What we're looking for: Core Skills: 2+ years … experience in cloud engineering Proficiency with TypeScript & Node Experience working with AWS in a production environment Ability to communicate effectively with both technical and non-technical stakeholders Beneficial Skills: Experience with AWS services such as Lambda, DynamoDB, API Gateway, Step Functions, AppSync, and EventBridge AWS Certifications Experience with other OOP languages (Python, Java, Go
range of technologies, including Java, SQL Server/Snowflake databases, Python and C#. We are in the process of migrating more of our data to Snowflake, leveraging technologies like AWS Batch, Apache Flink and AWS Step Functions for orchestration, and Docker containers. These new systems will respond in real-time to events such as position and … commercial software experience predominantly using the technologies listed below. Strong Java and SQL skills required; Python skills a bonus. Knowledge of operating with Cloud engineering platforms and use of AWS services like Batch, Step Functions, EKS and Docker containers. Important to have a good understanding of core Java and the JVM, as well as complex stored procedures and patterns, preferably
the Director of Product and Technology. Responsibilities Architecture & Solution Design: Oversee the technical architecture of our solutions, ensuring they meet performance, scalability, and security requirements. Design and develop scalable AWS architectures for API-based and data-centric applications. Define data pipelines, ETL processes, and storage solutions using AWS services such as S3, OpenSearch, Redshift, Step Functions … API management strategies, leveraging tools such as KONG Gateway, Lambda, and EKS. Collaborate with DevOps teams to implement CI/CD pipelines, Infrastructure as Code (IaC), and automation using AWS CloudFormation. Maintain cloud governance, security, and compliance best practices in AWS environments. Strategic & Business Impact: Work closely with the Director of Product and Technology and customers to define … of approved projects and explore new opportunities. Document solutions with specifications, estimates, and delivery timelines. Leadership & Mentorship: Provide technical leadership and mentoring to development teams, ensuring best practices in AWS, API design, and data architecture. Support the technical delivery team with troubleshooting and solution design. Line manage a Product Specialist, providing guidance on product packaging and administrative tasks. Qualifications
London, South East, England, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment Limited
ve become a trusted partner across a wide range of businesses. In this role you'll take ownership of the reliability and performance of large-scale data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll play a key role in diagnosing incidents, optimising system behaviour, and ensuring reporting data is delivered on time and without failure. The ideal candidate will have strong experience working with streaming and batch data systems, a solid understanding of monitoring and observability, and hands-on experience working with AWS, Apache Flink, Kafka, and Python. This is a fantastic opportunity to step into an SRE role focused on data reliability in a modern cloud-native environment, with full ownership of incident management, architecture, and performance. The Role: * Maintaining and monitoring real-time and batch data pipelines using Flink, Kafka, Python, and AWS * Acting as an escalation point for critical data incidents and leading root cause analysis * Optimising system performance, defining SLIs/SLOs, and driving reliability * Working closely with various other departments and teams to architect scalable, fault-tolerant data solutions
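Defining SLIs/SLOs, as the role above mentions, typically comes down to tracking an error budget: the fraction of allowed failures not yet consumed. A minimal sketch in Python; the SLO target and request counts below are illustrative, not from the posting.

```python
def error_budget_remaining(slo_target: float,
                          total_requests: int,
                          failed_requests: int) -> float:
    """Return the fraction (0.0-1.0) of the error budget still unspent.

    slo_target is the availability SLO, e.g. 0.999 for "three nines".
    The budget is the number of failures the SLO tolerates over the window.
    """
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        # A 100% SLO has no budget: any failure exhausts it.
        return 0.0 if failed_requests else 1.0
    return max(0.0, 1.0 - failed_requests / allowed_failures)
```

An SRE dashboard would feed this from real request metrics; here, 500 failures against a 99.9% SLO over one million requests leaves half the budget, a common trigger point for slowing feature rollouts.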
will help drive the evolution of our data architecture as we move from Redshift to Snowflake. Looking for someone with extensive experience with cloud providers? Hands-on experience with AWS services such as Glue (Spark), Lambda, Step Functions, ECS, Redshift, and SageMaker. Looking for someone with hands-on development Conducting code reviews, mentoring through pair programming. Looking … Building APIs, integrating with microservices, or contributing to backend systems — not just data pipelines or data modelling. CI/CD and Infrastructure-as-Code Tools like GitHub Actions, Jenkins, AWS CDK, CloudFormation, Terraform. Key Responsibilities: Design and implement scalable, secure, and cost-efficient data solutions on AWS, leveraging services such as Glue, Lambda, S3, Redshift, and Step … higher in a technical discipline Proven experience as a data engineer with strong hands-on programming skills and software engineering fundamentals, with experience building scalable solutions in cloud environments (AWS preferred) Extensive experience in AWS services, e.g. EC2, S3, RDS, DynamoDB, Redshift, Lambda, API Gateway Solid foundation in software engineering principles, including version control (Git), testing, CI/
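Orchestrating Glue and Lambda with Step Functions, as listed above, is usually expressed in Amazon States Language. A minimal sketch, built as a Python dict so it stays testable; the job and function names are placeholders, not real resources.

```python
import json

# Minimal Amazon States Language definition: run a Glue ETL job
# synchronously, then invoke a Lambda notification step.
# "example-etl-job" and "example-notify-fn" are invented placeholders.
state_machine = {
    "Comment": "Run a Glue ETL job, then notify via Lambda",
    "StartAt": "RunGlueJob",
    "States": {
        "RunGlueJob": {
            "Type": "Task",
            # .sync makes Step Functions wait for the Glue job to finish.
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "example-etl-job"},
            "Next": "Notify",
        },
        "Notify": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "Parameters": {"FunctionName": "example-notify-fn"},
            "End": True,
        },
    },
}

definition = json.dumps(state_machine)
```

In a CDK or CloudFormation deployment this JSON would be the state machine's `DefinitionString`; keeping it as data makes it easy to lint and unit-test before deploying.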
will be hybrid 2 days onsite in our Wimbledon office, London. As a Senior Data Engineer, you will lead data migration strategies and implementation across hybrid cloud environments (primarily AWS), enabling smooth and secure movement of legacy and modern systems. As a senior engineer, you'll design, develop, and optimise scalable and efficient data pipelines integrating PHP, and C … applications using .NET framework backend services and React frontends. You'll utilise tools such as Terraform for infrastructure-as-code (IaC), AWS (Lambda, EC2, EKS, Step Functions, VPC etc.) for ETL, Airflow pipelines, Snowflake, and ensure architectural alignment with AI/ML initiatives and data-driven services. You will serve as the go-to engineer for: End-to-end data migration architecture (on-premise to cloud or cloud-to-cloud). Designing scalable and secure systems using AWS services like S3, Lambda, EKS, EC2, VPC, RDS. Interfacing with both legacy PHP/C# systems and modern .NET cloud-native services. We're also looking for someone with some experience in AI to help us drive
Engineering Manager (Hands-on) Fully Remote - UK Based £110,000 - £120,000 TypeScript, Next.js, React, Postgres, AWS Are you a passionate and experienced hands-on Engineering Manager ready to lead and innovate? My client, a market-leading web trading technology company, is looking for an Engineering Manager to join their dynamic team! This is a fantastic chance to take … and processes. Strong problem-solving skills and an analytical, data-driven approach. Proficiency in a recognised programming language (e.g., JavaScript, Python, Java). Preferred Skills: TypeScript, Next.js/React, AWS (Lambda/Step Functions, Postgres/Dynamo, CDK), Jest, Playwright. My client offers a competitive salary, a share incentive scheme, private health insurance, and a flexible hybrid … progress, teamwork, and a relentless pursuit of excellence. If you're passionate about solving complex problems and thriving in a hyper-growth environment, this could be your next exciting step! Rates depend on experience and client requirements Job Information Job Reference: JO-54 Salary: £110000.00 - £120000.00 per annum Salary per: annum Job Duration: Job Start Date: 29/
data platform infrastructure will conform to a zero trust, least privilege architecture, with strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment. You will use Infrastructure as Code and CI/CD to continuously improve, evolve and repair the platform. You will be able to design architectures and create re … across CACI departments to develop and maintain the data platform Building infrastructure and data architectures in CloudFormation and SAM. Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, SageMaker, Redshift, Aurora and Snowflake Building data processing and analytics pipelines as code, using Python, SQL, PySpark, Spark, CloudFormation, Lambda, Step Functions … and access control architectures to secure sensitive data You will have: 3+ years of experience in a Data Engineering role. Strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, SageMaker, Aurora and Redshift. Experience administering databases and data platforms Good coding discipline in terms of style, structure
About: Step forward into the future of technology with ZILO. We're here to redefine what's possible in technology. While we're trusted by the global Transfer Agency sector, our technology is truly flexible and designed to transform any business at scale. We've created a unified platform that adapts to diverse needs, offering the scalability and reliability … matter expertise in data processing and reporting. In this role, you will own the reliability, performance, and operational excellence of our real-time and batch data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll act as the first line of defense for data-related incidents, rapidly diagnose root causes, and implement resilient solutions that keep critical … as on-call escalation for data pipeline incidents, including real-time stream failures and batch job errors. Rapidly analyze logs, metrics, and trace data to pinpoint failure points across AWS, Flink, Kafka, and Python layers. Lead post-incident reviews: identify root causes, document findings, and drive corrective actions to closure. Reliability & Monitoring Design, implement, and maintain robust observability for
configure software in staging and production environments. Implement fixes and enhancements for internal applications. System Operations and Monitoring Design monitoring solutions for system reliability. Manage software deployments and support AWS infrastructure. Maintain system status reports. Team Collaboration and Knowledge Sharing Coordinate support issues and triage effectively. Contribute to documentation, run-books, and guides. Collaborate across teams to improve operational … processes. PREFERRED QUALIFICATIONS Experience with distributed systems at scale. 2+ years developing software testing suites with frameworks like JUnit, TestNG, Selenium, Mockito. 2+ years maintaining cloud infrastructure, especially native AWS services like SQS, Lambda, DynamoDB, Step Functions, Kinesis. Amazon values an inclusive culture. If you require workplace accommodations during the application or onboarding process, please visit this
processes to improve efficiency, scalability, reliability and observability. Drive Engineering Excellence: Lead and manage all engineering activities across internal and external teams, ensuring high productivity and quality of execution. AWS Expertise: Strong expertise across AWS products, including S3, Glue, Spark, DBT, Terraform, and Redshift. Roadmap Prioritisation: Prioritize and manage engineering activities and personnel to deliver on a roadmap … Skills for the Data Operations Manager: Technology Degree with at least 5 years’ experience in data Proven experience in managing engineering teams in a fast-paced environment. Knowledge of AWS services and tools, including S3, Step Functions, Spark, DBT, Terraform, and Redshift. Strong leadership and communication skills, with the ability to inspire and motivate a diverse team.
from scratch Collaborating with cross-functional teams to deliver scalable solutions Mentoring junior engineers and contributing to a strong engineering culture Working with a modern, cloud-native stack Cloud: AWS (Lambda, S3, Kinesis, RDS, Step Functions, AppFlow) Monitoring: Graphite, Grafana, Splunk Bonus: Experience in marketing tech or AI What We're Looking For Strong full stack engineering
balance technical correctness with business priorities Takes initiative, identifies problems, and proactively proposes solutions, and is comfortable making key decisions. Powerful collaborator who works well across departments. Our stack AWS as our cloud compute platform Kubernetes (EKS) for container runtime and orchestration RDS (PostgreSQL, MySQL), Kafka, Redis Terraform for infrastructure as code Lambda and Step Functions Datadog
complete a proof of concept, or build a toolkit to help your team. We don't expect you to know it all. Responsibilities: Threat modelling & architecture reviews - break down new AWS-backed services, map trust boundaries, build attack trees, and define security requirements before a single line of code is merged. Security automation - write and maintain IaC-driven checks, custom … Lambda/Step Functions, CI/CD gates, and CSPM rules so that secure defaults are enforced at scale. Hands-on testing & hardening - abuse the infrastructure you just modelled (cloud-native pen-testing, IAM privilege escalation drills, container escape attempts) and guide remediation in pull requests. DevSecOps enablement - pair with platform engineers, review Terraform/CloudFormation/Kubernetes … a continuous learning journey. About the candidate: Must-haves A minimum Bachelor's degree (2.1 or higher) in Computer Science or a technology-related field Deep AWS internals knowledge Proven threat-modelling chops (STRIDE, attack trees, or other methodologies). Strong coding ability in at least one language (Python, Go, Rust, etc.). CI/CD
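The attack-tree modelling mentioned in the responsibilities above can be captured with a tiny data structure: AND nodes require every child path to be feasible, OR nodes require only one. A toy sketch; the goals and feasibility flags below are invented for illustration, not findings from any real review.

```python
from dataclasses import dataclass, field

@dataclass
class AttackNode:
    """One node in an attack tree.

    Leaves carry a feasibility flag (set from testing evidence);
    interior nodes combine their children with an AND or OR gate.
    """
    goal: str
    gate: str = "OR"           # "OR" or "AND" for interior nodes
    feasible: bool = False     # only meaningful for leaves
    children: list = field(default_factory=list)

    def evaluate(self) -> bool:
        if not self.children:
            return self.feasible
        results = [child.evaluate() for child in self.children]
        return all(results) if self.gate == "AND" else any(results)

# Hypothetical tree: escalating to admin requires phishing AND pivoting,
# or finding an exposed IAM role.
root = AttackNode("escalate to admin", gate="OR", children=[
    AttackNode("phish then pivot", gate="AND", children=[
        AttackNode("phish a developer", feasible=True),
        AttackNode("pivot to prod account", feasible=False),
    ]),
])
```

Evaluating the tree after each drill shows at a glance whether any complete attack path remains open, which is what drives the prioritisation of remediation work.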
Fortune 100 company, we are leading a digital disruption that will redefine how people experience insurance. Key Responsibilities: • Design and implement scalable, secure, and cost-efficient data solutions on AWS, leveraging services such as Glue, Lambda, S3, Redshift, and Step Functions. • Lead the development of robust data pipelines and analytics platforms, ensuring high availability, performance, and maintainability. • Demonstrate … higher in a technical discipline • Proven experience as a data engineer with strong hands-on programming skills and software engineering fundamentals, with experience building scalable solutions in cloud environments (AWS preferred) • Extensive experience in AWS services, e.g. EC2, S3, RDS, DynamoDB, Redshift, Lambda, API Gateway • Solid foundation in software engineering principles, including version control (Git), testing, CI/… Proficiency in at least one programming language, with Python strongly preferred for data processing, automation, and pipeline development • Strong acumen for application health through performance monitoring, logging, and debugging • AWS or Snowflake certifications are a plus About Liberty Specialty Markets (LSM) Liberty Specialty Markets is part of Global Risk Solutions and the broader Liberty Mutual Insurance Group, which is
financial industry, is highly desirable. Able to talk to code, i.e., read and review code, pair program, and debug code-related performance issues, SQL tuning, etc. Experience with AWS services such as S3, RDS, Aurora, NoSQL, MSK (Kafka) Experience with batch processing/ETL using Glue Jobs, AWS Lambda and Step Functions. Experience with designing bespoke
Relational and Nonrelational databases/integrating with third party services. Ability to read and review code, paired programming, and debugging code-related performance issues, SQL tuning, etc. Experience with AWS services such as S3, RDS, Aurora, NoSQL, MSK (Kafka). Experience with batch processing/ETL using Glue Jobs, AWS Lambda, and Step Functions. Experience with designing