London, England, United Kingdom Hybrid / WFH Options
Global Relay
any of our openings. Your Role: Global Relay delivers enterprise services to 23,000 customers in 90 countries, including 22 of the top 25 global banks. As a DevOps platform engineer you will be responsible for the smooth operation of the on-premises production and lower environment platforms depended on by engineering teams throughout the organisation. Your Job: Given … the varied nature of the role, your time will be split across the following areas: Automation: Designing and implementing platform solutions that scale with minimal downtime while maintaining the highest security standards Operations: Monitoring and ensuring smooth operation of production and test environments by executing common sysadmin (system administration) tasks and automating repetitive tasks Collaboration: Collaborating with cross-functional … able to work under pressure and you are a keen problem solver with a drive for finding efficient solutions to challenging problems. 3+ years' experience as a Linux/Platform Engineer or in a similar role Bachelor's degree in Computer Science or a related field Experienced in using Python You are comfortable working in a cross-geographic team. Understanding of computer More ❯
The Role We're expanding our cloud offering capabilities and seeking a Platform Engineer to strengthen our foundation for future growth. You'll develop our cloud automation strategy, creating scalable, secure infrastructure through code. This role has a significant impact on both our development teams and customers' engineering teams. You'll shape our cloud architecture decisions from the More ❯
and managing both IaaS and PaaS resources, including storage accounts, networks, Azure SQL, and VMs, as well as Windows Server and SQL Server at the OS and application layers Experience with data engineering and analytics platforms such as Azure Data Factory and Databricks Knowledge of cloud networking concepts, including VPNs, VNETs and peering, subnets, routing, firewalls and DNS Additional qualifications that would be More ❯
Glasgow, Lanarkshire, Scotland, United Kingdom Hybrid / WFH Options
Arnold Clark
Mindset: Encourages and supports team discussions on improvement initiatives Complexity: Investigating and managing any anomalies to resolution and assisting in wider investigations as required. Collaboration: Supporting and working alongside engineering teams on the latest technologies. Empowering development squads to build and deliver software solutions and services Key skills that will benefit you: Experience designing and building solutions within Azure More ❯
Oliver Bernard are currently seeking a Senior Platform Engineer to join a well-established team at a FinTech company in London. This hire is part of a period of transformation across the business, focused on expanding their global product and instilling a strong DevOps culture whilst driving transformation and innovation. More ❯
London, England, United Kingdom Hybrid / WFH Options
BAE Systems
experts. We work collaboratively across 10 countries to collect, connect, and understand complex data, enabling governments, armed forces, and commercial businesses to unlock digital advantages in demanding environments. Senior Platform Engineer Job Title: Senior Platform Engineer Requisition ID: 121762 Location: London – flexible hybrid working arrangements available. Please discuss options with your recruiter. Grade: GG10 – GG11 Referral Bonus … clients on impactful solutions. Join a growing team that delivers for clients and engages in community outreach to build tech and cyber skills. Role Description We seek experienced Senior Platform Engineers to join our UK Government sector team, contributing to innovative and high-quality solutions. Desired Background Programming in JavaScript, Java, .Net, or Python Designing and building Proof of More ❯
Loved Workplaces. If you embrace challenges, think differently, and want to make an impact, you’ll thrive at Zopa. Follow us on Instagram @zopalife. The Team: As an Associate Platform Engineer in our Cloud Infrastructure Team, your goal is to support and improve the infrastructure that powers Zopa's applications and services, ensuring they are robust, scalable, and secure. More ❯
impact, you’ll thrive here at Zopa. Join us, and make it count. Want to see us in action? Follow us on Instagram @zopalife. The Team As an Associate Platform Engineer in our Cloud Infrastructure Team, your ultimate goal is to support and enhance the infrastructure that powers Zopa's applications and services. The team is dedicated to maintaining More ❯
Role Title: DevOps Engineer/AWS Platform Engineer Role Location: Solihull, UK Role Type: Permanent (Hybrid) Must hold active Security Clearance. Job Description:- About the Role: As a DevOps Engineer, you will be responsible for creating the right environment for your multidisciplinary team to succeed, helping them to self-organize whilst fostering a culture of learning and transparency. You More ❯
Senior Site Reliability & Platform Engineer Manchester Hybrid/Flexible Working Full-Time Drive better infrastructure and developer experience at scale At Sorted, we're building robust, scalable systems to support modern digital services - and we're looking for a Site Reliability & Platform Engineer to help lead the way. You'll sit at the heart of our engineering operations, bringing together SRE principles and modern platform engineering practices. This includes combining principles of SRE - such as service-level reliability, observability, incident response - with platform engineering practices like GitOps, Infrastructure as Code, DevSecOps automation, and self-service enablement, to help development teams ship faster, safer, and more cost-efficiently. What you'll be doing … Azure-based platforms Applying SRE principles like SLOs, observability, and incident management to drive service reliability Building Infrastructure as Code using Terraform (v1.7+) and GitOps workflows Enabling teams through platform tools, reusable Terraform modules, and self-service infrastructure Enhancing CI/CD pipelines (Azure DevOps, YAML-based) with security scanning and progressive delivery Supporting AKS clusters and Azure services More ❯
The Onyx Research Data Platform organization represents a major investment by GSK R&D and Digital & Tech, designed to deliver a step-change in our ability to leverage data, knowledge, and prediction to find new medicines. We are a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data/metadata/knowledge … and reducing time spent on "data mechanics" Providing best-in-class AI/ML and data analysis environments to accelerate our predictive capabilities and attract top-tier talent Aggressively engineering our data at scale to unlock the value of our combined data assets and predictions in real-time Onyx Product Management is at the heart of our mission, ensuring … responsible for developing and executing the product strategy of our DevOps and Infrastructure platforms to meet customer needs. You will partner closely with the leaders of Onyx's engineering teams (DevOps and Infrastructure, AI/ML analysis and computing platform, data & knowledge platform, data engineering, UI/UX engineering), along with the Onyx portfolio More ❯
London, England, United Kingdom Hybrid / WFH Options
DataCamp
and data and AI skill development for a more secure future. From our first-class courses, projects, code-alongs, certification programs, and DataLab—we are an all-in-one platform on a mission to democratize data and AI education for all. About the role DataCamp's infrastructure team, which is part of the Platform Engineering department, is … T-shaped cross-functional team that looks after CI/CD pipelines, cloud infrastructure (deployed on AWS), logging, monitoring and security. The infrastructure team also looks after the data platform (deployed on GCP), as we have data engineers embedded in our cross-functional infrastructure team. The team helps advise our production engineering teams on infrastructure best practices on … all DataCamp projects and looks after the whole DataCamp Platform to ensure commercial availability for our customers. To facilitate this we have a highly automated CI/CD pipeline based on CircleCI and Spotify Backstage (an internal engineering portal), which allows developers to ship what they build, increasing deployment speed, ownership, and visibility. The infrastructure team aims to More ❯
ensure data quality and governance, and collaborate across cross-functional teams to deliver high-performance data platforms in production environments. This role requires a deep understanding of modern data engineering practices, real-time processing, and cloud-native solutions. Key Responsibilities: Data Pipeline Development & Management: Design, implement, and maintain scalable and reliable data pipelines to ingest, transform, and load structured … processing. Develop stream-processing applications using Apache Kafka and optimize performance for large-scale datasets. Enable data enrichment and correlation across primary, secondary, and tertiary sources. Cloud, Infrastructure, and Platform Engineering: Develop and deploy data workflows on AWS or GCP, using services such as S3, Redshift, Pub/Sub, or BigQuery. Containerize data processing tasks using Docker, orchestrate with Kubernetes, and ensure production-grade deployment. Collaborate with platform teams to ensure scalability, resilience, and observability of data pipelines. Database Engineering: Write and optimize complex SQL queries on relational (Redshift, PostgreSQL) and NoSQL (MongoDB) databases. Work with ELK stack (Elasticsearch, Logstash, Kibana) for search, logging, and real-time analytics. Support Lakehouse architectures and hybrid data storage models More ❯
decision platform. The candidate will collaborate with development teams and fellow cloud infrastructure engineers to address critical issues. Proficiency in cloud technologies, containers, Kubernetes, networking, security, scripting, automation, and platform engineering is required to ensure seamless system operations. The candidate should possess strong technical aptitude, software development skills, analytical and communication skills, and exceptional problem-solving ability. What You'll Do … highly available, scalable, and secure cloud-based services, promoting efficiency and self-service principles. Develop and maintain automation scripts and tools to streamline infrastructure provisioning, configuration, and deployment, empowering engineering teams. Implement and manage Kubernetes clusters for container orchestration, monitoring, and scaling. Drive the evolution of services toward cloud-native managed services, including the evolution of Kubernetes. Drive … efforts to enhance cloud infrastructure security, including access controls, encryption, and vulnerability assessments, focusing on engineering security solutions. Collaborate on CI/CD (TeamCity) pipelines to automate software deployment, including the build platform (Java, Gradle Enterprise) and QA/Testing tooling, to drive DevEx up and CFR to zero, emphasizing engineering and self-service automation. Define and More ❯