execution in cloud-native environments. Collaborate with product managers, solution architects, and other engineers to design end-to-end LLM-powered experiences for internal and customer-facing applications. Utilize AWS services such as Lambda, Step Functions, Bedrock, S3, ECS/Fargate, DynamoDB, and API Gateway to support scalable, serverless infrastructure. Write production-grade Python code, following best … or LLM-centric development. Proven track record with LangGraph, LangChain, or similar orchestration frameworks. Expert in Python (asyncio, FastAPI preferred). Hands-on experience building and deploying applications on AWS, particularly using Lambda, Fargate, S3, Step Functions, and DynamoDB. Familiarity with Amazon Bedrock is a plus. Strong understanding of agentic patterns, prompt chaining, tool calling, and …
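The agentic patterns this listing asks for (prompt chaining and tool calling, as popularised by LangGraph/LangChain) can be sketched as a minimal dispatch loop. This is a pure-Python illustration, not any framework's actual API: the model is a stub, and the tool name and helper functions are hypothetical.

```python
# Minimal sketch of a tool-calling agent loop. The "model" is stubbed out;
# in practice it would be a real LLM call (e.g. via Amazon Bedrock).

def fake_llm(prompt: str) -> dict:
    # Hypothetical stand-in for a model: returns either a tool request
    # or a final answer, mimicking a tool-calling response format.
    if "42 * 17" in prompt:
        return {"tool": "calculator", "args": {"expr": "42 * 17"}}
    return {"answer": prompt}

# Registry of callable tools the agent may invoke (names are illustrative).
TOOLS = {
    "calculator": lambda args: str(eval(args["expr"], {"__builtins__": {}})),
}

def run_agent(user_msg: str, max_steps: int = 3) -> str:
    prompt = user_msg
    for _ in range(max_steps):
        decision = fake_llm(prompt)
        if "answer" in decision:
            return decision["answer"]
        # Execute the requested tool and chain its result into the next prompt.
        result = TOOLS[decision["tool"]](decision["args"])
        prompt = f"Tool result: {result}"
    return prompt

print(run_agent("What is 42 * 17?"))  # → Tool result: 714
```

Real frameworks add state graphs, retries, and structured tool schemas on top of this loop, but the chain-of-tool-calls shape is the same.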
London, South East, England, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment Limited
we've become a trusted partner across a wide range of businesses. In this role you'll take ownership of the reliability and performance of large-scale data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll play a key role in diagnosing incidents, optimising system behaviour, and ensuring reporting data is delivered on time and without failure. The ideal candidate will have strong experience working with streaming and batch data systems, a solid understanding of monitoring and observability, and hands-on experience working with AWS, Apache Flink, Kafka, and Python. This is a fantastic opportunity to step into an SRE role focused on data reliability in a modern cloud-native environment, with full ownership of incident management … architecture, and performance. The Role: *Maintain and monitor real-time and batch data pipelines using Flink, Kafka, Python, and AWS *Act as an escalation point for critical data incidents and lead root-cause analysis *Optimise system performance, define SLIs/SLOs, and drive reliability *Work closely with various other departments and teams to architect scalable, fault-tolerant data solutions
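Defining SLIs/SLOs, as this role requires, typically means computing an indicator (e.g. the fraction of pipeline runs that succeed) and tracking how much error budget an objective leaves. A minimal sketch, assuming an availability SLI and an illustrative 99.5% SLO (both numbers are assumptions, not from the listing):

```python
# Sketch: an availability SLI over pipeline run outcomes, and the share of
# the error budget remaining under an assumed 99.5% SLO.

def availability_sli(outcomes: list) -> float:
    """Fraction of successful runs in the observation window."""
    return sum(outcomes) / len(outcomes)

def error_budget_remaining(sli: float, slo: float = 0.995) -> float:
    """1.0 = budget untouched, 0.0 = fully spent, negative = SLO breached."""
    allowed = 1.0 - slo       # failure rate the SLO tolerates
    burned = 1.0 - sli        # failure rate actually observed
    return 1.0 - burned / allowed

runs = [True] * 997 + [False] * 3   # 3 failed runs out of 1000
sli = availability_sli(runs)
print(round(sli, 3), round(error_budget_remaining(sli), 2))  # → 0.997 0.4
```

In practice the outcomes would come from monitoring (e.g. Flink job metrics or CloudWatch), and the budget-burn rate would drive alerting and incident escalation.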
processes to improve efficiency, scalability, reliability and observability. Drive Engineering Excellence: Lead and manage all engineering activities across internal and external teams, ensuring high productivity and quality of execution. AWS Expertise: Strong expertise across AWS products and related tooling, including S3, Glue, Spark, DBT, Terraform, and Redshift. Roadmap Prioritisation: Prioritise and manage engineering activities and personnel to deliver on a roadmap … Skills for the Data Product Manager: Technology degree with at least 5 years' experience in data. Proven experience in managing engineering teams in a fast-paced environment. Knowledge of AWS services and tools, including S3, Step Functions, Spark, DBT, Terraform, and Redshift. Strong leadership and communication skills, with the ability to inspire and motivate a diverse team.
Milton Keynes, Buckinghamshire, South East, United Kingdom Hybrid / WFH Options
LA International Computer Consultants Ltd
under Amazon Bedrock and SageMaker * Python with APIs to ChatGPT * Strong coding skills using libraries like pandas, NumPy, scikit-learn, PyTorch, and Hugging Face Transformers. Key Skills & Experience: AWS Data Science Environment: Hands-on experience with SageMaker, Lambda, Step Functions, S3, Athena. Model deployment and pipeline orchestration in AWS. OCR Use-Case Development: Proficiency with …
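The fit/predict workflow behind the libraries this listing names (scikit-learn in particular) can be sketched in pure Python with a toy nearest-centroid classifier. This is an illustration of the estimator pattern only, not scikit-learn's actual implementation; the data is synthetic.

```python
# Pure-Python sketch of the scikit-learn-style fit/predict workflow:
# a nearest-centroid classifier trained on synthetic 2-D points.

class NearestCentroid:
    def fit(self, X, y):
        # Average the feature vectors of each class to get its centroid.
        self.centroids = {}
        for label in set(y):
            pts = [x for x, lbl in zip(X, y) if lbl == label]
            self.centroids[label] = [sum(col) / len(pts) for col in zip(*pts)]
        return self

    def predict(self, X):
        def dist2(a, b):
            # Squared Euclidean distance (square root not needed for argmin).
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return [min(self.centroids, key=lambda lbl: dist2(x, self.centroids[lbl]))
                for x in X]

X_train = [[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]]
y_train = ["low", "low", "high", "high"]
model = NearestCentroid().fit(X_train, y_train)
print(model.predict([[0.1, 0.0], [1.0, 1.0]]))  # → ['low', 'high']
```

The same fit-then-predict shape scales up to the real libraries: scikit-learn estimators, PyTorch training loops, and SageMaker training jobs all separate fitting on training data from inference on new inputs.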
in Java and at least one other server-side programming language. Experience with microservices architecture and distributed systems. Strong understanding of data storage technologies (MySQL, Hadoop, Cassandra). Familiarity with AWS services such as RDS, Step Functions and Kinesis (preferred). Experience with unit, integration and end-to-end testing frameworks. Ability to define and monitor SLOs/KPIs …