Responsibilities: Design and prototype AI-powered features using tools like OpenAI, Anthropic, and Google Vertex AI. Develop full-stack solutions using Python and TypeScript in a Next.js/AWS Amplify stack. Implement scalable, event-driven workflows with AWS Step Functions and Lambda. Integrate and normalize financial data across formats including JSON, XML, and CSV … or open-source projects demonstrating your curiosity and skills. Comfortable balancing innovation with compliance, privacy, and security best practices. Technology Stack: Languages: Python, TypeScript Frameworks: Next.js, LangChain Cloud & Infrastructure: AWS Amplify, AWS Lambda, AWS Step Functions AI Platforms: OpenAI, Anthropic, Google (Gemini/Vertex AI) Data Formats: JSON, XML, CSV Dev Tools: Cursor, Windsurf, Loom …
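For illustration only, a minimal Python sketch of what normalising records arriving as JSON, XML, or CSV into one common shape might look like, using only the standard library. The field names (trade_id, amount, currency) and the <trade> XML element are assumptions for the example, not details from the listing.

```python
import csv
import io
import json
import xml.etree.ElementTree as ET


def normalize_record(raw: dict) -> dict:
    """Map source-specific keys onto one common shape (field names are illustrative)."""
    return {
        "trade_id": str(raw.get("trade_id") or raw.get("id") or ""),
        "amount": float(raw.get("amount") or 0.0),
        "currency": str(raw.get("currency") or "GBP").upper(),
    }


def from_json(payload: str) -> list[dict]:
    return [normalize_record(item) for item in json.loads(payload)]


def from_csv(payload: str) -> list[dict]:
    return [normalize_record(row) for row in csv.DictReader(io.StringIO(payload))]


def from_xml(payload: str) -> list[dict]:
    # Assumes one <trade> element per record, with one child element per field.
    root = ET.fromstring(payload)
    return [normalize_record({child.tag: child.text for child in trade}) for trade in root.iter("trade")]
```

For example, `from_csv("trade_id,amount,currency\n1,100.5,gbp\n")` returns one normalised record with the amount cast to a float and the currency upper-cased.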
Senior Cloud Data Engineer (AWS), Flutter Functions. Locations: London, UK; Cluj-Napoca, Romania; Porto, Portugal. Full time. Job requisition ID: JR127551. This role is … Please note: We are unable to provide visa sponsorship for this position. Candidates must have the right to work in the UK (or applicable country) without sponsorship. About Flutter Functions: The Flutter Functions division is a key component of Flutter Entertainment, responsible for providing essential support and services across the organization. The division encompasses various corporate functions … finance, legal, human resources, technology, and more, ensuring seamless operations and strategic alignment throughout the company. Flutter consists of two commercial divisions (FanDuel and International) and our central Flutter Functions: COO, Finance & Legal. Here in Flutter Functions we work with colleagues across all our divisions and regions to deliver something we call the Flutter Edge. It's our …
part of Theodo Group, we collaborate with a network of 10 companies globally, focusing on varied sectors! Role Overview: As a Cloud Engineer, you'll build innovative products using AWS Serverless, working closely with developers and architects. You'll be hands-on coding while also engaging in client-facing discussions, bridging technical and business conversations. Our core stack is … AWS Serverless & TypeScript, but you'll gain exposure to different technologies and ways of working. You'll be part of a team of top Cloud Engineers, with plenty of opportunities for skill development and growth. We focus on delivering high-quality work that drives long-term improvements for our clients. What we're looking for: Core Skills: 2+ years … experience in cloud engineering Proficiency with TypeScript & Node Experience working with AWS in a production environment Ability to communicate effectively with both technical and non-technical stakeholders Beneficial Skills: Experience with AWS services such as Lambda, DynamoDB, API Gateway, Step Functions, AppSync, and EventBridge AWS Certifications Experience with other OOP languages (Python, Java, Go …
deploy LangGraph-based agentic systems that orchestrate LLM-driven tools and workflows. · Build and integrate modular AI agents capable of real-world task execution in cloud-native environments. · Utilize AWS services such as Lambda, Step Functions, Bedrock, S3, ECS/Fargate, DynamoDB, and API Gateway to support scalable, serverless infrastructure. · Write production-grade Python code, following best … or LLM-centric development. · Proven track record with LangGraph, LangChain, or similar orchestration frameworks. · Expert in Python (asyncio, FastAPI preferred). · Hands-on experience building and deploying applications on AWS, particularly using Lambda, Fargate, S3, Step Functions, and DynamoDB. · Familiarity with AWS Bedrock is a plus. · Strong understanding of agentic patterns, prompt chaining, tool calling, and …
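As a sketch of what a small LangGraph agent loop can look like (not this team's actual system), assuming the langgraph package is installed; the state fields and node logic are placeholders, and the real LLM and tool calls are stubbed out:

```python
from typing import TypedDict

from langgraph.graph import END, StateGraph


class AgentState(TypedDict):
    task: str
    plan: str
    result: str


def plan_step(state: AgentState) -> dict:
    # Placeholder: a real agent would call an LLM here to draft a plan.
    return {"plan": f"plan for: {state['task']}"}


def act_step(state: AgentState) -> dict:
    # Placeholder: a real agent would invoke tools (APIs, Lambda functions, etc.) here.
    return {"result": f"executed {state['plan']}"}


graph = StateGraph(AgentState)
graph.add_node("plan", plan_step)
graph.add_node("act", act_step)
graph.set_entry_point("plan")
graph.add_edge("plan", "act")
graph.add_edge("act", END)

app = graph.compile()
print(app.invoke({"task": "summarise yesterday's failed settlements", "plan": "", "result": ""}))
```

Each node returns a partial state update, which is the usual LangGraph pattern for composing planning and tool-execution steps.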
execution in cloud-native environments. Collaborate with product managers, solution architects, and other engineers to design end-to-end LLM-powered experiences for internal and customer-facing applications. Utilize AWS services such as Lambda, Step Functions, Bedrock, S3, ECS/Fargate, DynamoDB, and API Gateway to support scalable, serverless infrastructure. Write production-grade Python code, following best … or LLM-centric development. Proven track record with LangGraph, LangChain, or similar orchestration frameworks. Expert in Python (asyncio, FastAPI preferred). Hands-on experience building and deploying applications on AWS, particularly using Lambda, Fargate, S3, Step Functions, and DynamoDB. Familiarity with AWS Bedrock is a plus. Strong understanding of agentic patterns, prompt chaining, tool calling, and …
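For a sense of the Bedrock piece, a minimal boto3 sketch using the Converse API; the region, model ID, and prompt are example values, and it assumes the caller already has credentials and Bedrock model access configured:

```python
import boto3

# Assumes AWS credentials and Bedrock model access are already set up.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": "List three risks in this deployment plan: ..."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```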
range of technologies, including Java, SQL Server/Snowflake databases, Python and C#. We are in the process of migrating more of our data to Snowflake, leveraging technologies like AWS Batch, Apache Flink and AWS Step Functions for orchestration and Docker containers. These new systems will respond in real-time to events such as position and … commercial software experience predominantly using the technologies listed below. Strong Java and SQL skills required, Python skills a bonus. Knowledge of operating with cloud engineering platforms and use of AWS services like Batch, Step Functions, EKS and Docker containers. Important to have a good understanding of core Java and the JVM, as well as complex stored procedures and patterns, preferably …
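To illustrate the orchestration pattern (not this team's actual workflow), a sketch that defines a single-state Step Functions state machine which submits an AWS Batch job synchronously and then starts an execution via boto3; every name and ARN below is a placeholder:

```python
import json

import boto3

# Illustrative only: ARNs, queue and job-definition names are placeholders.
definition = {
    "Comment": "Run a containerised batch load, then continue",
    "StartAt": "SubmitBatchJob",
    "States": {
        "SubmitBatchJob": {
            "Type": "Task",
            # Optimised Step Functions integration that waits for the Batch job to finish.
            "Resource": "arn:aws:states:::batch:submitJob.sync",
            "Parameters": {
                "JobName": "nightly-position-load",
                "JobQueue": "arn:aws:batch:eu-west-1:123456789012:job-queue/etl",
                "JobDefinition": "arn:aws:batch:eu-west-1:123456789012:job-definition/etl-load:1",
            },
            "End": True,
        }
    },
}

sfn = boto3.client("stepfunctions")
machine = sfn.create_state_machine(
    name="nightly-position-load",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsBatchRole",  # placeholder role
)
sfn.start_execution(stateMachineArn=machine["stateMachineArn"], input=json.dumps({}))
```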
the Director of Product and Technology. Responsibilities: Architecture & Solution Design: Oversee the technical architecture of our solutions, ensuring they meet performance, scalability, and security requirements. Design and develop scalable AWS architectures for API-based and data-centric applications. Define data pipelines, ETL processes, and storage solutions using AWS services such as S3, OpenSearch, Redshift, Step Functions … API management strategies, leveraging tools such as KONG Gateway, Lambda, and EKS. Collaborate with DevOps teams to implement CI/CD pipelines, Infrastructure as Code (IaC), and automation using AWS CloudFormation. Maintain cloud governance, security, and compliance best practices in AWS environments. Strategic & Business Impact: Work closely with the Director of Product and Technology and customers to define … of approved projects and explore new opportunities. Document solutions with specifications, estimates, and delivery timelines. Leadership & Mentorship: Provide technical leadership and mentoring to development teams, ensuring best practices in AWS, API design, and data architecture. Support the technical delivery team with troubleshooting and solution design. Line manage a Product Specialist, providing guidance on product packaging and administrative tasks. Qualifications …
will help drive the evolution of our data architecture as we move from Redshift to Snowflake. Looking for someone with extensive experience with cloud providers? Hands-on experience with AWS services such as Glue (Spark), Lambda, Step Functions, ECS, Redshift, and SageMaker. Looking for someone with hands-on development: Conducting code reviews, mentoring through pair programming. Looking … Building APIs, integrating with microservices, or contributing to backend systems, not just data pipelines or data modelling. CI/CD and Infrastructure-as-Code: Tools like GitHub Actions, Jenkins, AWS CDK, CloudFormation, Terraform. Key Responsibilities: Design and implement scalable, secure, and cost-efficient data solutions on AWS, leveraging services such as Glue, Lambda, S3, Redshift, and Step … higher in a technical discipline Proven experience as a data engineer with strong hands-on programming skills and software engineering fundamentals, with experience building scalable solutions in cloud environments (AWS preferred) Extensive experience in AWS services, e.g. EC2, S3, RDS, DynamoDB, Redshift, Lambda, API Gateway Solid foundation in software engineering principles, including version control (Git), testing, CI/…
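As a rough sketch of a Glue (Spark) job like those mentioned above: it uses the standard Glue job boilerplate, only runs inside an AWS Glue job environment, and the bucket paths and column names are placeholders, not anything from the advert.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job setup; JOB_NAME is supplied by the Glue job runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw events, keep the columns we need, and write partitioned Parquet.
events = spark.read.json("s3://example-raw-bucket/events/")
cleaned = events.select("event_id", "event_type", "occurred_at").dropDuplicates(["event_id"])
cleaned.write.mode("overwrite").partitionBy("event_type").parquet("s3://example-curated-bucket/events/")

job.commit()
```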
London, South East, England, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment Limited
ve become a trusted partner across a wide range of businesses. In this role you'll take ownership of the reliability and performance of large-scale data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll play a key role in diagnosing incidents, optimising system behaviour, and ensuring reporting data is delivered on time and without failure. The ideal candidate will have strong experience working with streaming and batch data systems, a solid understanding of monitoring and observability, and hands-on experience working with AWS, Apache Flink, Kafka, and Python. This is a fantastic opportunity to step into an SRE role focused on data reliability in a modern cloud-native environment, with full ownership of incident management, architecture, and performance. The Role: *Maintaining and monitoring real-time and batch data pipelines using Flink, Kafka, Python, and AWS *Act as an escalation point for critical data incidents and lead root cause analysis *Optimising system performance, defining SLIs/SLOs, and driving reliability *Working closely with various other departments and teams to architect scalable, fault-tolerant data solutions …
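A minimal sketch of the kind of SLI instrumentation this could involve, publishing end-to-end pipeline latency as a custom CloudWatch metric via boto3; the namespace, metric, and dimension names are assumptions rather than anything from the advert:

```python
import time

import boto3

cloudwatch = boto3.client("cloudwatch")


def record_pipeline_latency(pipeline: str, started_at: float) -> None:
    """Publish end-to-end latency so an alarm can fire when the SLO is at risk."""
    latency_seconds = time.time() - started_at
    cloudwatch.put_metric_data(
        Namespace="DataPlatform/Reporting",  # illustrative namespace
        MetricData=[
            {
                "MetricName": "EndToEndLatency",
                "Dimensions": [{"Name": "Pipeline", "Value": pipeline}],
                "Value": latency_seconds,
                "Unit": "Seconds",
            }
        ],
    )


# Example usage once a reporting batch has been delivered:
# record_pipeline_latency("daily-reporting", started_at=batch_start_timestamp)
```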
will be hybrid 2 days onsite in our Wimbledon office, London. As a Senior Data Engineer, you will lead data migration strategies and implementation across hybrid cloud environments (primarily AWS), enabling smooth and secure movement of legacy and modern systems. As a senior engineer, you'll design, develop, and optimise scalable and efficient data pipelines integrating PHP, and C … applications using .NET framework backend services and React frontends. You'll utilise tools such as Terraform for infrastructure-as-code (IaC), AWS (Lambda, EC2, EKS, Step Functions, VPC etc.) for ETL, Airflow pipelines, Snowflake, and ensure architectural alignment with AI/ML initiatives and data-driven services. You will serve as the go-to engineer for: End … to-end data migration architecture (on-premise to cloud or cloud-to-cloud). Designing scalable and secure systems using AWS services like S3, Lambda, EKS, EC2, VPC, and RDS. Interfacing with both legacy PHP/C# systems and modern .NET cloud-native services. We're also looking for someone with some experience in AI to help us drive …
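For illustration, a minimal Airflow DAG sketch of a legacy-to-Snowflake step in the spirit of the pipelines described above; the DAG id, schedule, and the extract/load callables are placeholders, not this employer's code:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_legacy_rows(**_context):
    # Placeholder: pull a batch from the legacy (PHP/C#-backed) database.
    return ["row-1", "row-2"]


def load_to_snowflake(**_context):
    # Placeholder: write the batch to Snowflake (e.g. via snowflake-connector-python).
    pass


with DAG(
    dag_id="legacy_to_snowflake_migration",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_legacy_rows", python_callable=extract_legacy_rows)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)
    extract >> load
```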
Engineering Manager (Hands on) Fully Remote - UK Based £110,000 - £120,000 TypeScript, Next.js, React, Postgres, AWS Are you a passionate and experienced hands-on Engineering Manager ready to lead and innovate? My client, a market-leading web trading technology company, is looking for an Engineering Manager to join their dynamic team! This is a fantastic chance to take … and processes. Strong problem-solving skills and an analytical, data-driven approach. Proficiency in a recognised programming language (e.g., JavaScript, Python, Java). Preferred Skills: TypeScript, Next.js/React, AWS (Lambda/Step Functions, Postgres/Dynamo, CDK), Jest, Playwright. My client offers a competitive salary, a share incentive scheme, private health insurance, and a flexible hybrid … progress, teamwork, and a relentless pursuit of excellence. If you're passionate about solving complex problems and thriving in a hyper-growth environment, this could be your next exciting step! Rates depend on experience and client requirements. Job Information: Job Reference: JO-54. Salary: £110,000 - £120,000 per annum. Job Start Date: 29/…
data platform infrastructure will conform to a zero trust, least privilege architecture, with strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment. You will use Infrastructure as Code and CI/CD to continuously improve, evolve and repair the platform. You will be able to design architectures and create re … across CACI departments to develop and maintain the data platform Building infrastructure and data architectures in CloudFormation and SAM. Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, SageMaker, Redshift, Aurora and Snowflake Building data processing and analytics pipelines as code, using Python, SQL, PySpark, Spark, CloudFormation, Lambda, Step Functions … and access control architectures to secure sensitive data You will have: 3+ years of experience in a Data Engineering role. Strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, SageMaker, Aurora and Redshift. Experience administrating databases and data platforms Good coding discipline in terms of style, structure …
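A small PySpark sketch of a pipeline-as-code step in the spirit of the above; bucket paths, columns, and the aggregation are illustrative only:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal batch rollup: read landing-zone events, aggregate per day and account,
# and write partitioned Parquet to the curated zone. Paths and columns are placeholders.
spark = SparkSession.builder.appName("daily-usage-rollup").getOrCreate()

events = spark.read.parquet("s3://example-landing/events/")

daily_usage = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "account_id")
    .agg(F.count("*").alias("event_count"))
)

daily_usage.write.mode("overwrite").partitionBy("event_date").parquet("s3://example-curated/daily_usage/")
```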
About: Step forward into the future of technology with ZILO. We're here to redefine what's possible in technology. While we're trusted by the global Transfer Agency sector, our technology is truly flexible and designed to transform any business at scale. We've created a unified platform that adapts to diverse needs, offering the scalability and reliability … matter expertise in data processing and reporting. In this role, you will own the reliability, performance, and operational excellence of our real-time and batch data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll act as the first line of defense for data-related incidents, rapidly diagnose root causes, and implement resilient solutions that keep critical … as on-call escalation for data pipeline incidents, including real-time stream failures and batch job errors. Rapidly analyze logs, metrics, and trace data to pinpoint failure points across AWS, Flink, Kafka, and Python layers. Lead post-incident reviews: identify root causes, document findings, and drive corrective actions to closure. Reliability & Monitoring: Design, implement, and maintain robust observability for …
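One concrete triage step this kind of on-call work often starts with is checking consumer lag on a stalled topic; a sketch using the kafka-python package, with the broker address, topic, and consumer group as placeholders:

```python
from kafka import KafkaConsumer, TopicPartition

# Read-only lag check: no messages are consumed and no offsets are committed.
consumer = KafkaConsumer(
    bootstrap_servers="localhost:9092",   # placeholder broker
    group_id="reporting-pipeline",        # placeholder consumer group
    enable_auto_commit=False,
)

partition = TopicPartition("trade-events", 0)  # placeholder topic/partition
consumer.assign([partition])

latest = consumer.end_offsets([partition])[partition]
committed = consumer.committed(partition) or 0
print(f"consumer lag on {partition.topic}[{partition.partition}]: {latest - committed}")

consumer.close()
```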
and enhancements for internally developed software applications System Operations and Monitoring • Design and implement monitoring solutions to ensure system reliability and performance • Manage software deployments and support infrastructure in AWS and web technology environments • Create and maintain system and support status reports Team Collaboration and Knowledge Management • Coordinate support issue handoffs within the team, and efficient triaging of incoming … of experience with developing software testing suites using popular testing frameworks like JUnit, TestNG, Selenium, Mockito, etc - 2+ years of building and maintaining cloud computing infrastructure experience, mainly native AWS (SQS, Lambda, DynamoDB, Step Functions, Kinesis, etc) Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need …
processes to improve efficiency, scalability, reliability and observability. Drive Engineering Excellence: Lead and manage all engineering activities across internal and external teams, ensuring high productivity and quality of execution. AWS Expertise: Strong expertise across AWS products, including S3, Glue, Spark, DBT, Terraform, and Redshift. Roadmap Prioritisation: Prioritize and manage engineering activities and personnel to deliver on a roadmap … Skills for the Data Operations Manager: Technology degree with at least 5 years' experience in data. Proven experience in managing engineering teams in a fast-paced environment. Knowledge of AWS services and tools, including S3, Step Functions, Spark, DBT, Terraform, and Redshift. Strong leadership and communication skills, with the ability to inspire and motivate a diverse team.
processes to improve efficiency, scalability, reliability and observability. Drive Engineering Excellence: Lead and manage all engineering activities across internal and external teams, ensuring high productivity and quality of execution. AWS Expertise: Strong expertise across AWS products, including S3, Glue, Spark, DBT, Terraform, and Redshift. Roadmap Prioritisation: Prioritize and manage engineering activities and personnel to deliver on a roadmap … Skills for the Data Product Manager: Technology degree with at least 5 years' experience in data. Proven experience in managing engineering teams in a fast-paced environment. Knowledge of AWS services and tools, including S3, Step Functions, Spark, DBT, Terraform, and Redshift. Strong leadership and communication skills, with the ability to inspire and motivate a diverse team.
from scratch Collaborating with cross-functional teams to deliver scalable solutions Mentoring junior engineers and contributing to a strong engineering culture Working with a modern, cloud-native stack Cloud: AWS (Lambda, S3, Kinesis, RDS, Step Functions, AppFlow) Monitoring: Graphite, Grafana, Splunk Bonus: Experience in marketing tech or AI What We're Looking For Strong full stack engineering …
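As a tiny illustration of the event side of that stack, a boto3 sketch that pushes one event onto a Kinesis stream; the stream name and payload shape are made up for the example:

```python
import json

import boto3

kinesis = boto3.client("kinesis")

# Illustrative event; partitioning by user keeps a user's events ordered within a shard.
event = {"user_id": "u-123", "action": "campaign_viewed"}
kinesis.put_record(
    StreamName="marketing-events",           # placeholder stream name
    Data=json.dumps(event).encode("utf-8"),  # Kinesis expects bytes
    PartitionKey=event["user_id"],
)
```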
balance technical correctness with business priorities Takes initiative, identifies problems, and proactively proposes solutions, and is comfortable making key decisions. Powerful collaborator who works well across departments. Our stack: AWS as our cloud compute platform; Kubernetes (EKS) for container runtime and orchestration; RDS (PostgreSQL, MySQL), Kafka, Redis; Terraform for infrastructure as code; Lambda and Step Functions; Datadog …