GCP DevOps Engineer – AI Infrastructure (Financial Services)
Location: London
Contract Type: Full-time/Permanent
2 days per week required in the office and 3 from home
£60,000 - £80,000 DOE
Join a forward-thinking financial services firm leveraging AI to transform decision-making and customer experiences. We’re seeking a skilled GCP DevOps Engineer to architect and maintain cloud-native infrastructure … a hands-on technical role with strategic impact across infrastructure, automation, and security.
Key Responsibilities
Infrastructure & Cloud Engineering
Design, build, and manage scalable, resilient infrastructure on Google Cloud Platform (GCP). Implement Infrastructure as Code (IaC) using Terraform to ensure consistent and secure deployments. Utilize GCP services such as Compute Engine, Cloud Run, Cloud Functions, BigQuery, and Kubernetes to support … up monitoring and alerting systems using Cloud Monitoring, Cloud Logging, Prometheus. Troubleshoot infrastructure issues and ensure minimal downtime for critical AI services.
Required Skills
Strong hands-on experience with GCP services: Compute Engine, Kubernetes, Cloud Storage, BigQuery, Cloud Run. Proficient in scripting with Python or Bash. Deep understanding of Docker and Kubernetes for containerization and orchestration. Expertise in CI …
Slough, South East England, United Kingdom Hybrid / WFH Options
Peaple Talent
GCP Data Engineer | London | £50,000 - £70,000
Peaple Talent have partnered with a specialist data consultancy delivering services across data engineering, data strategy, data migration, BI & analytics. With a diverse portfolio of clients and cutting-edge projects, our client is a trusted name in the data consulting space. Due to exciting growth plans, we are now looking for a … Data Engineer, specialising in Google Cloud Platform (GCP) and BigQuery.
We are looking for:
Demonstrable data engineering/BI skills, with a focus on having delivered solutions in Google Cloud Platform (GCP)
Strong experience designing and delivering data solutions using BigQuery
Proficient in SQL and Python
Experience working with Big Data technologies such as Apache Spark or PySpark
Excellent … communication skills, with the ability to engage effectively with senior stakeholders
Nice to haves:
GCP Data Engineering certifications
BigQuery or other GCP tool certifications
What’s in it for you:
📍 Location: London (Hybrid)
💻 Remote working: Occasional office visits each month
💰 Salary: £50,000–£70,000 DOE
🤝 Collaborative culture and strong team support
📈 Vast L&D opportunities both internally and externally …
Slough, South East England, United Kingdom Hybrid / WFH Options
X4 Technology
… problem-solving skills and a proactive approach to technical challenges
Strong communication and collaboration skills, comfortable working in cross-functional teams
Desirable:
Exposure to cloud platforms (AWS, Azure, or GCP)
Experience with containerisation (Docker, Kubernetes)
Familiarity with CI/CD tools and automation pipelines
Understanding of infrastructure-as-code tools (Terraform, Ansible, etc.)
If you have a background in the …
Practical understanding of cloud security, networking protocols, and secure deployments. Experienced in observability stacks—logging (e.g., ELK), monitoring (e.g., Prometheus), and tracing (e.g., Jaeger, OpenTelemetry). Cloud certifications (AWS, GCP, or similar) are advantageous. Excellent problem-solving skills, especially in high-availability environments (broadcasting or streaming experience is a plus). Enthusiastic, proactive team player with a passion for innovation.
Slough, South East England, United Kingdom Hybrid / WFH Options
Sanderson Government & Defence
… Ansible, or CloudFormation)
Strong background in Linux systems and scripting (Bash or Python)
Exposure to containerisation and orchestration (Docker, Kubernetes, or OpenShift)
Understanding of cloud services (AWS, Azure, or GCP)
Experience working in secure or regulated environments
Collaborative mindset with strong problem-solving and communication skills
Desirable Skills
Familiarity with monitoring and logging tools (ELK, Grafana, Prometheus)
Experience integrating security …
… level DevOps, MLOps, or related infrastructure-focused engineering role. Strong proficiency in Python and familiarity with ML frameworks such as TensorFlow or PyTorch. Deep experience with cloud platforms (AWS, GCP, or Azure) and container orchestration tools (Docker, Kubernetes). Solid understanding of CI/CD systems (e.g., GitHub Actions, GitLab CI, ArgoCD) and infrastructure-as-code tools (e.g., Terraform, Helm …
… and internal engineering teams.
Technical Profile
7+ years’ experience in software engineering or cloud architecture, with recent focus on solution design and delivery leadership. Deep experience with AWS, GCP, or Azure, including infrastructure-as-code (Terraform, CloudFormation, Crossplane, Ansible). Proven ability to architect and deploy containerised systems (Docker, Kubernetes) at scale. Proficiency in one or more modern languages …
… in Go. Proficiency in React and modern JavaScript/TypeScript. Solid understanding of microservices architecture, RESTful APIs, and service-to-service communication. Experience working with cloud infrastructure (e.g. AWS, GCP or Azure). Strong grasp of software engineering fundamentals: testing, documentation, performance, reliability. Excellent communication and collaboration skills: you're comfortable working cross-functionally with engineers, designers, and product managers.
Slough, South East England, United Kingdom Hybrid / WFH Options
Harnham
… direction and long-term roadmap.
🎯 What we’re looking for:
5+ years of experience as a Software Engineer, ideally in Python (3.10+). Proven experience developing cloud-native applications (GCP or AWS). Strong understanding of CI/CD pipelines (e.g. GitHub Actions) and containerisation (Docker). Experience designing scalable, secure systems using modern principles (SOLID, TDD). Familiarity with …
Slough, South East England, United Kingdom Hybrid / WFH Options
Experis UK
… and frameworks (e.g., MLflow, Kubeflow, SageMaker, Vertex AI). Experience with data engineering concepts — ETL pipelines, data lakes, and cloud data platforms. Proficiency with cloud services (AWS, Azure, or GCP) for model deployment and orchestration. Knowledge of containerization and orchestration tools (Docker, Kubernetes). Experience integrating ML models into production environments via APIs or microservices. Excellent problem-solving, analytical, and …
… least 2 years in a leadership or managerial role. Experience managing complex IT environments, including development, testing, staging, and production setups. Hands-on experience with cloud services (AWS, Azure, GCP) and infrastructure automation tools (e.g., Terraform, Ansible, Puppet, Chef). Experience with containerisation (e.g., Docker, Kubernetes) and microservices architectures is a plus.
Skills: Strong knowledge of IT infrastructure, virtualisation technologies …
… reports directly to the Chief Technology Officer (CTO), ensuring close alignment with our technical vision and strategic roadmap.
Responsibilities
Maintain and evolve cloud infrastructure, primarily on Google Cloud Platform (GCP), with some exposure to AWS and Azure
Write, review, and manage modular, reusable Terraform code
Operate and enhance CI/CD pipelines using GitHub Actions
Containerise and deploy backend applications … Infrastructure-as-Code using Terraform (modular and reusable patterns preferred)
Strong foundation in Linux, networking, and container technologies
Experience deploying and managing applications on at least one major cloud provider (GCP preferred; AWS or Azure acceptable)
Experience working with relational databases in production environments (e.g., Postgres, MySQL), including basic performance troubleshooting, migrations, backups, and access control.
Familiarity with observability tools … Ability to systematically troubleshoot and debug distributed systems
Comfortable reading, modifying, and writing code in Python and/or Node.js
Nice to Have
Production experience with Google Cloud Platform (GCP)
Knowledge of security and IAM best practices, including role design, policy management, and access boundaries
Experience optimising CI/CD pipelines for speed, reliability, and developer experience (e.g., caching, incremental …
… and management of cloud resources.
Cloud & AWS Knowledge: Strong experience with AWS and native services, particularly EKS, EC2, VPC, S3, and Transit Gateways. Experience with other major cloud platforms (GCP, Azure) is a plus.
Containerisation & Orchestration: Extensive knowledge of containerisation (Docker) and orchestration platforms (Kubernetes). Ability to deploy and manage clusters at scale.
Database Familiarity: Solid understanding of database …
… of professional experience in backend or software engineering. Deep expertise in Python and frameworks such as FastAPI, Flask, or Django. Proven experience building and scaling APIs in cloud environments (GCP, AWS, or Azure). Strong understanding of version control, CI/CD workflows, and containerised development. Excellent communication and problem-solving skills.
Bonus Points For: Experience in AI-driven or …
Slough, South East England, United Kingdom Hybrid / WFH Options
Az-Tec Talent
… delivery background. Familiarity with tools such as dbt, Fivetran, Matillion, or similar. Programming skills in Python, Java, or Scala. Cloud certifications (SnowPro, Databricks Certified, AWS/Azure/GCP). Knowledge of DataOps, CI/CD, and infrastructure-as-code concepts.
What’s on Offer
Hybrid working model (2–3 days per week in London). Competitive salary …
Slough, South East England, United Kingdom Hybrid / WFH Options
Billigence
… data tools such as dbt, Fivetran, Matillion, or similar data integration platforms
Programming skills in Python, Java, or Scala
Relevant cloud certifications (SnowPro, Databricks Certified, AWS/Azure/GCP Data Engineering certifications)
Experience with DataOps, CI/CD practices, and infrastructure-as-code
Knowledge of data governance, data quality frameworks, and metadata management
Benefits: Hybrid/remote working environment …
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Cegeka UK Limited
… and microservices architecture. Basic knowledge of front-end technologies (HTML, CSS, JavaScript). Understanding of Agile methodologies. Familiarity with CI/CD pipelines and cloud platforms (e.g., AWS, Azure, GCP). Familiarity with Cloud technologies such as Azure and AWS. Familiarity with Atlassian applications such as Jira, Confluence and Bitbucket.
What We Offer: Competitive salary and benefits including private healthcare …
… design, growth, and other engineers to ship high-impact features
Ensuring scalability, performance, security, reliability and maintainability across the full stack
Contributing to cloud infrastructure setup and optimisation (AWS, GCP or Azure), containerisation, deployment pipelines, monitoring and optimisation
Implementing CI/CD pipelines, automated testing, monitoring, and related DevOps practices
Mentoring junior engineers and helping improve engineering practices across the …
… Consul etc.)
• Continuous Integration practices and Continuous Deployment/Delivery
• Containers – Docker, Kubernetes etc.
• Configuration Management – Ansible, Chef, Puppet etc.
• Cloud – AWS preferred; multi-cloud experience, i.e. with Azure, GCP etc., highly desirable
• Monitoring – ELK, Prometheus, Splunk, Grafana, etc.
• Experience in one of the following scripting languages: Java, Bash, Python, Powershell, Golang, etc.
• Experience working with Linux and/or …
… AWS MSK, etc.)
• Strong proficiency in Java, Python, or Scala
• Solid understanding of event-driven architecture and data streaming patterns
• Experience deploying Kafka on cloud platforms such as AWS, GCP, or Azure
• Familiarity with Docker, Kubernetes, and CI/CD pipelines
• Excellent problem-solving and communication abilities
Preferred: Candidates with experience in Confluent Kafka and its ecosystem will be given …
… /ELT design, and data lakehouse concepts. Proficiency in Python, SQL, and Spark optimization techniques. Experience working with cloud data platforms such as Azure Data Lake, AWS S3, or GCP BigQuery. Strong understanding of data quality frameworks, testing, and CI/CD pipelines for data workflows. Excellent communication skills and ability to collaborate across teams.
Preferred Qualifications
Experience with Databricks …
Slough, South East England, United Kingdom Hybrid / WFH Options
Explore Group
… and project delivery timelines. Stay current with emerging technologies, especially in AI, data processing, and scalable web architectures.
Tech Stack
Frontend: JavaScript, React
Backend: Node.js, Python
Infrastructure: AWS/GCP (experience with modern cloud environments is a plus)
AI/ML: Exposure to applied AI, data models, or large language models preferred
Tooling: GitHub, Docker, CI/CD pipelines
About …