AI Cyber Security Lead

AI Cyber Security Specialist – GenAI & Agentic Systems

Assignment Type: Temporary, on an ongoing basis; you will be engaged via Hays

Location: West London

Working Environment: Hybrid – office-based on site with some remote working

Pay type: Competitive daily pay rate – inside IR35 contract

Our client

Transform AI is our strategic investment in applied AI, automation, machine learning and modern software engineering. Our mission is to deliver real EBIT impact for our airlines by deploying AI agents and enterprise automation products at scale.

We operate as a modern engineering organisation backed by world-class research partners. We build reusable AI products. We industrialise them. And we run them reliably across multiple airlines.

We are now hiring an AI Cyber Security Specialist to ensure that our GenAI, Microsoft-based AI solutions, and agentic systems are deployed securely, responsibly and in full alignment with Group Cyber Security policies, standards and assurance processes.

This role has an explicit dual reporting structure:

Primary / functional reporting line: to the AI Factory team within Transform AI.

Dotted-line reporting: to the Group Cyber Security organisation to guarantee full alignment with cyber, risk, assurance and governance frameworks.

You will serve as the embedded security authority inside the AI Factory, ensuring every AI Agent and GenAI product is safe, hardened and approved before it reaches production.

Purpose of the Role

The AI Cyber Security Specialist safeguards the entire lifecycle of our GenAI and agentic systems. You will define secure architecture patterns for AI Agents, validate Microsoft and OpenAI integrations, embed guardrails, defend against attacks specific to LLMs and agent orchestration, and enforce Group Cyber Security compliance at every stage.

Key Accountabilities:

Secure Architecture & Design for GenAI and Agentic Systems

Define secure-by-design patterns for AI Agent architectures across LangGraph, Microsoft Copilot Studio, Azure OpenAI, OpenAI, Anthropic, vector databases and agent orchestration frameworks.

Ensure identity, access control, encryption and secret management align with Azure best practice (Key Vault, Managed Identity, VNet integration).

Validate cloud service usage across Azure and AWS in line with Group standards.

Ensure that RAG pipelines, tool-use APIs, memory systems and multi-agent workflows follow robust security controls.

Threat Modelling for AI Agents & LLM Pipelines

Conduct threat modelling specific to AI Agents, including:

Prompt injection and cross-agent contamination

Tool misuse and unauthorised tool execution

Hallucination-driven automation risks

Model inversion, data leakage and supply-chain vulnerabilities

Misalignment within agent orchestration flows

Lead security risk assessments and present clear, defensible risk positions to AI Factory leadership and Group Cyber.

Compliance & Policy Alignment

Ensure full alignment with Group Cyber Security, Responsible AI, data protection, cloud governance, and model usage guidelines.

Interpret Microsoft Responsible AI policy requirements and ensure they are applied across Azure OpenAI and Copilot integrations.

Provide comprehensive input into DPIAs, Responsible AI assessments and security design reviews.

Security Assurance, Testing & Hardening

Oversee adversarial testing, red teaming, LLM jailbreak testing and agent-specific abuse scenarios.

Validate model access controls, content filtering, logging, auditability and end-to-end traceability of agent actions.

Approve production readiness for all GenAI and agentic deployments from a security standpoint.

Operational Security & Monitoring

Work with Cloud, SOC, Engineering and Product teams to define monitoring of:

Agent actions and tool invocation

Unexpected behaviours in orchestration flows

Data exfiltration

Compromised prompts or instructions

Insider misuse or misconfiguration

Ensure incident response playbooks include AI-specific edge cases and escalation paths into Group Cyber.

Cross-Team Alignment & Governance

Act as the security bridge between the AI Factory and Group Cyber Security.

Attend governance forums across both organisations, providing visibility of risks, issues and standards.

Ensure delivery teams apply security controls consistently across all AI Agent and GenAI products.

Training, Guidance & Best Practice

Train engineering and product teams on secure agent engineering, Microsoft GenAI security, LLM threat patterns and cyber expectations.

Build reusable templates and reference architectures for secure AI Agent implementation.

Provide hands-on support to unblock security design challenges.

Continuous Improvement

Stay ahead of emerging threats and attack vectors specific to Azure OpenAI, multi-agent orchestration, advanced RAG, and runtime autonomy.

Feed lessons learned back into Group Cyber and contribute to enterprise-wide AI security standards.

Improve security assurance processes, tooling and automated checks across the lifecycle.

Qualifications & Skills

Strong cyber security background with experience embedded in engineering or platform teams.

Hands-on expertise securing GenAI or AI Agent systems in complex enterprise environments.

Deep experience with Microsoft Azure, Azure OpenAI, Microsoft Copilot, AKS, VNet security, and cloud hardening.

Familiarity with GenAI technologies including OpenAI, Anthropic, LangChain/LangGraph, vector databases, retrieval pipelines and agent orchestration.

Strong grasp of AI-specific security threats such as prompt injection, agent tool misuse, model inversion, jailbreaks and content manipulation.

Comfortable navigating enterprise cyber governance, risk frameworks and assurance cycles.

Excellent communicator, able to explain risk simply and build alignment across senior stakeholders.

Industry certifications a plus: CISSP, CCSP, GIAC, OSCP, or specialised AI/ML security credentials.

Why Join Us

This role places you at the centre of our client's next generation of AI capabilities. You will shape how secure, responsible and compliant GenAI and agentic systems are built across multiple airlines.

You will have a direct impact on the safe deployment of AI Agents that will operate at scale across the Group, while influencing policy, architecture and standards both within the AI Factory and Group Cyber Security.

What next?

If you are interested in this role, click ‘apply now’ to forward an up-to-date copy of your CV, or call us now on 0116 261 5001.

Job Details

Company
Hays
Location
London, UK