Central London, London, United Kingdom Hybrid / WFH Options
Halian Technology Limited
…ensure software quality
• Collaborate on system design and architecture in a distributed AWS environment
• Implement CI/CD pipelines and deploy applications using AWS services (e.g., ECS, Lambda, DynamoDB, S3)
• Participate in code reviews, sprint planning, and retrospectives
• Monitor and troubleshoot production systems; respond to incidents when needed
Required Skills & Experience:
• 3+ years of professional software engineering experience
• Strong …
Skills & Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
• 10+ years of experience in data engineering.
• Strong hands-on experience with AWS services: S3, Glue, Lake Formation, Athena, Redshift, Lambda, IAM, CloudWatch.
• Proficiency in PySpark, Python, DBT, Airflow, Docker, and SQL.
• Deep understanding of data modeling techniques and best practices.
• Experience with CI …
You’ll have the autonomy to make technical decisions and help shape how platform engineering is done as the team continues to scale.
Tech stack:
• AWS (core services: EC2, RDS, S3, IAM, etc.)
• Configuration management: Ansible
• Monitoring and observability: Grafana, Prometheus
• Kubernetes (building and managing production clusters)
• Terraform (IaC provisioning)
• Python or Java (scripting, automation)
• GitHub Actions (CI/CD …
You’ll have the autonomy to make technical decisions and help shape how platform engineering is done as the team continues to scale.
Tech stack:
• AWS (core services: EC2, S3, IAM, etc.)
• Kubernetes (building and managing production clusters)
• Terraform (for full IaC provisioning)
• Python (scripting, automation)
• GitHub Actions (CI/CD pipelines)
• Docker & Helm (for containerised app deployments)
What …
…best practices in DevOps, AWS, and Java development.
Skill Requirements:
• Proficient in DevOps methodologies and tools such as Jenkins, Docker, and Kubernetes.
• Strong understanding of AWS services like EC2, S3, RDS, and Lambda.
• Expertise in Java programming and frameworks like Spring and Hibernate.
• Hands-on experience in designing and implementing scalable and secure cloud architectures.
• Ability to troubleshoot complex system …
…is 90% remote with travel required once a month to the office.
Key Skills and Responsibilities:
• Design, deliver, and support secure and scalable AWS infrastructure using services like EC2, S3, ECS, and Fargate
• Integrate SAST (Static Application Security Testing) and DAST (Dynamic Application Security Testing) tools into CI/CD pipelines to enforce secure development practices
• Automate infrastructure provisioning …
You’ll have the autonomy to make technical decisions and help shape how platform engineering is done as the team continues to scale.
Tech stack:
• AWS (core services: EC2, RDS, S3, IAM, etc.)
• Configuration management: Ansible
• Monitoring and observability: Grafana, Prometheus
• Kubernetes (building and managing production clusters)
• Terraform (IaC provisioning)
• GitHub Actions (CI/CD pipelines)
What They’re Looking …
…and scripts using Python to enhance operational efficiency and reduce manual intervention.
• Experience deploying, managing, and optimizing infrastructure on AWS Cloud, including familiarity with AWS services such as EC2, S3, Lambda, and RD…
Reasonable Adjustments: Respect and equality are core values to us. We are proud of the diverse and inclusive community we have built, and we welcome applications …
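Several of the listings above pair Python scripting with AWS Lambda. As a minimal sketch of what that combination looks like in practice, here is a hypothetical Lambda handler for an API Gateway proxy event; the query parameter, route, and response shape are illustrative and not taken from any of the roles:

```python
import json


def handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy event.

    Hypothetical sketch: reads an optional ?name= query parameter and
    returns a JSON body in the proxy-integration response format
    (statusCode / headers / body).
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

Deployed behind API Gateway, the same function body works unchanged; locally it can be exercised by passing a plain dict as the event.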
…practices.
• A solid understanding of networking protocols and concepts (TCP/IP, DNS, SSL/TLS, routing, etc.)
• Proficient with AWS services including EC2, ELB, VPC, IAM, CloudWatch, S3, VPC Lattice, Transit Gateway, VPN, and more
• Practical knowledge of DevOps tools: Git, Jenkins, Docker, Ansible, Terraform
• Strong scripting skills (Bash, Python, or equivalent)
Candidates must be eligible …
…engineering principles, REST APIs, and asynchronous programming
• Experience with modern frontend development, ideally with React
Nice to Have:
• Experience with NestJS
• Cloud experience with AWS services (Lambda, API Gateway, S3, etc.)
• Exposure to payments or fintech environments
This role is 3 days on site in central London and offers a quick 2-3 stage interview process with interview slots …
…scalable, and efficient data infrastructure.
What you’ll be doing – your accountabilities:
• Lead the design and implementation of robust, scalable, and secure data solutions using AWS services such as S3, Glue, Lambda, Redshift, EMR, Kinesis, and more, covering data pipelines, warehousing, and lakehouse architectures.
• Drive the migration of legacy data workflows to Lakehouse architectures, leveraging Apache Iceberg to enable …
…tools, and practices for greater efficiency and impact.
The skills you’ll need to succeed:
• Leadership in data engineering and Agile delivery
• Advanced knowledge of AWS data services (e.g. S3, Glue, EMR, Lambda, Redshift)
• Expertise in big data technologies and distributed systems
• Strong coding and optimisation skills (e.g. Python, Spark, SQL)
• Data quality management and observability
• Strategic thinking and …
…improvement.
• Participate in sprint planning, technical design sessions, and architectural reviews.
Required Skills & Experience:
• Strong proficiency in React.
• Deep experience with AWS services such as Lambda, API Gateway, DynamoDB, S3, and CloudFormation.
• Solid understanding of TypeScript, Node.js, and RESTful API design.
• Familiarity with DevOps practices and tools (e.g., GitHub Actions, Terraform, CloudWatch).
• Experience working in Agile environments and …
City of London, London, United Kingdom Hybrid / WFH Options
Qurated
…data lifecycle management, from data creation to governance
• Deep expertise in AWS cloud, event-driven architecture, and streaming technologies like Kafka
• Familiarity with data tooling such as Snowflake, Databricks, S3, and broader ecosystem awareness
• Strong grasp of architectural frameworks (e.g., TOGAF) and enterprise design principles
• Confident communicator with the ability to engage exec-level audiences and cross-functional stakeholders …
…driven tools and workflows.
· Build and integrate modular AI agents capable of real-world task execution in cloud-native environments.
· Utilize AWS services such as Lambda, Step Functions, Bedrock, S3, ECS/Fargate, DynamoDB, and API Gateway to support scalable, serverless infrastructure.
· Write production-grade Python code, following best practices in software design, testing, and documentation.
· Build robust CI …
· …track record with LangGraph, LangChain, or similar orchestration frameworks.
· Expert in Python (asyncio, FastAPI preferred).
· Hands-on experience building and deploying applications on AWS, particularly using Lambda, Fargate, S3, Step Functions, and DynamoDB.
· Familiarity with AWS Bedrock is a plus.
· Strong understanding of agentic patterns, prompt chaining, tool calling, and memory/state management in LLM applications.
· Solid …
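The agentic patterns this role names, tool calling in particular, can be sketched without any framework. The registry and dispatcher below are a hypothetical, minimal illustration of the idea; LangChain and LangGraph provide production-grade versions with schemas, retries, and state management:

```python
import json
from typing import Callable, Dict

# Hypothetical tool registry: maps a tool name to a plain Python callable.
TOOLS: Dict[str, Callable[..., str]] = {}


def tool(name: str):
    """Decorator that registers a function so an agent loop can
    dispatch to it by name (the core of the tool-calling pattern)."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register


@tool("add")
def add(a, b) -> str:
    # Example tool; real agents would expose search, retrieval, etc.
    return str(a + b)


def dispatch(call_json: str) -> str:
    """Execute a model-emitted tool call of the illustrative form
    {"tool": "<name>", "args": {...}} and return its string result."""
    call = json.loads(call_json)
    return TOOLS[call["tool"]](**call["args"])
```

In a full agent loop, the LLM would emit the JSON call, `dispatch` would run it, and the result would be appended to the conversation state before the next model turn.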
…ingestion processes from APIs and internal systems, leveraging tools such as Kafka, Spark, or AWS-native services.
Cloud Data Platforms - Develop and maintain data lakes and warehouses (e.g., AWS S3, Redshift).
Data Quality & Governance - Implement automated validation, testing, and monitoring for data integrity.
Performance & Troubleshooting - Monitor workflows, enhance logging/alerting, and fine-tune performance.
Data Modelling - Handle …
…IAM), and GDPR-aligned data practices.
Technical Skills & Experience:
• Proficient in Python and SQL for data processing.
• Solid experience with Apache Airflow - writing and configuring DAGs.
• Strong AWS skills (S3, Redshift, etc.).
• Big data experience with Apache Spark.
• Knowledge of data modelling, schema design, and partitioning.
• Understanding of batch and streaming data architectures (e.g., Kafka).
• Experience with …
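The "schema design and partitioning" skill listed above commonly means Hive-style key layouts in S3, which engines like Athena and Glue can prune by partition. A minimal sketch, with illustrative table and file names:

```python
from datetime import date


def partitioned_key(table: str, day: date, filename: str) -> str:
    """Build a Hive-style partitioned S3 object key
    (table/year=YYYY/month=MM/day=DD/file), a common data-lake layout
    that lets query engines skip partitions outside a date filter.
    Table and file names here are illustrative."""
    return (
        f"{table}/year={day.year:04d}/month={day.month:02d}/"
        f"day={day.day:02d}/{filename}"
    )
```

A daily ingestion job would write each batch under that day's prefix, so a query filtered to one month reads only that month's objects.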