AI Engineer
We’re hiring three senior hands-on specialists to secure and assure next-generation AI/ML and agentic systems in a regulated UK financial services environment. You’ll sit between security, engineering, risk and compliance, shaping how AI is designed, tested and run in production.
Common Requirements (All Roles)
- Strong UK financial services background; familiar with DORA, FCA Operational Resilience requirements and the EU AI Act.
- Practical experience with Amazon Bedrock (Agents, Knowledge Bases, Guardrails, model lifecycle).
- Solid AI/ML fundamentals: foundation models (FMs), RAG, non-deterministic agents, tool use.
- Secure AI knowledge: OWASP Top 10 for LLM Applications, agentic AI threats; NIST AI RMF exposure preferred.
- Able to work across security, engineering and risk; clear written and verbal communication.
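As a flavour of the Bedrock guardrail work involved, the sketch below assembles an `invoke_model` request for boto3's `bedrock-runtime` client with a guardrail attached. The model, guardrail ID and version are placeholders, and a real invocation would need AWS credentials; only the request assembly is shown here.

```python
import json

def build_guarded_request(prompt: str,
                          model_id: str = "anthropic.claude-3-haiku-20240307-v1:0",
                          guardrail_id: str = "gr-EXAMPLE",        # placeholder ID
                          guardrail_version: str = "1") -> dict:
    """Assemble kwargs for bedrock-runtime invoke_model with a guardrail attached."""
    return {
        "modelId": model_id,
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": guardrail_version,
        "body": json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_guarded_request("Summarise our outage runbook.")
# With credentials configured, this would be invoked as:
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(**req)
```

Attaching the guardrail at the invocation layer means the same policy applies regardless of which agent or application path reaches the model.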
Role 1 – Identity Expert (Agentic & Machine Identity)
- Lead the SPIFFE/SPIRE rollout and its integration with AWS (IAM Roles Anywhere, STS session tags).
- Implement sender-constrained (proof-of-possession) tokens and harden on-behalf-of (OBO) flows: claim validation, short-lived credentials, just-in-time access for non-human identities.
- Enhance SOC playbooks for identity-based agent threats (confused deputy, federation hijack).
- Ensure every agent action is traceable to a human identity (EU AI Act Articles 12 & 14).
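To make the OBO hardening concrete, here is a minimal sketch of the claim checks such a flow might enforce. The `act` claim follows RFC 8693 token exchange, the agent workload identity follows the SPIFFE ID convention, and the trust domain, TTL and claim values are illustrative; signature verification is assumed to happen upstream.

```python
import time

def validate_obo_claims(claims: dict, expected_aud: str,
                        trust_domain: str = "example.org",   # illustrative trust domain
                        max_ttl: int = 300) -> bool:
    """Check an on-behalf-of token: right audience, short-lived,
    delegation chain present, agent bound to a SPIFFE workload ID,
    and a human subject to trace the action back to."""
    now = int(time.time())
    return (
        claims.get("aud") == expected_aud
        and claims.get("exp", 0) > now
        and claims.get("exp", 0) - claims.get("iat", 0) <= max_ttl
        and "act" in claims  # RFC 8693 actor claim: who is acting on the user's behalf
        and claims["act"].get("sub", "").startswith(f"spiffe://{trust_domain}/")
        and bool(claims.get("sub"))  # the human identity behind the agent action
    )

claims = {
    "sub": "user:alice",
    "aud": "payments-api",
    "iat": int(time.time()),
    "exp": int(time.time()) + 120,
    "act": {"sub": "spiffe://example.org/agent/payments-bot"},
}
```

Rejecting tokens without an `act` chain is one way to close the confused-deputy gap: the downstream service can see both the delegating human and the acting workload.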
Role 2 – Threat & Adversarial AI Expert
- Lead AI threat modelling (STRIDE applied to AI, OWASP LLM/agentic guidance, attack trees).
- Maintain priority threat scenarios (prompt injection, sleeper agents, denial-of-wallet).
- Translate threats into adversarial test cases and run scenario workshops.
- Expand the safeguard catalogue and maintain an adversarial AI knowledge base.
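One lightweight shape for the threat-to-test mapping this role maintains is a catalogue keyed by scenario, where each entry carries its OWASP reference and the adversarial test cases derived from it. IDs, names and mappings below are illustrative, not the firm's actual catalogue.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    id: str
    name: str
    owasp_ref: str                                  # e.g. OWASP LLM Top 10 entry
    test_cases: list = field(default_factory=list)  # adversarial tests derived from it

catalogue = [
    Threat("T-001", "Prompt Injection", "LLM01",
           ["indirect injection via retrieved document",
            "tool-output injection into agent scratchpad"]),
    Threat("T-002", "Denial-of-Wallet", "LLM10",
           ["recursive agent loop driving token spend"]),
    Threat("T-003", "Sleeper Agent", "agentic",     # trigger-conditioned behaviour
           []),
]

def untested(threats):
    """Threats with no adversarial test case yet: candidates for scenario workshops."""
    return [t.id for t in threats if not t.test_cases]
```

Keeping the mapping in a structured form lets the evals role (below) turn each test case into an automated probe rather than a one-off exercise.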
Role 3 – AI Evals & Red Teaming Expert
- Implement automated adversarial testing in CI/CD (e.g. AgentDojo, garak, PyRIT) with release gates.
- Build metrics and measurement for attack success rates, uncertainty and drift.
- Map threats to tests and support EU AI Act evidence gathering (Article 15).
- Own the AI-BOM and testing for bias, hallucination and memorisation across all agentic systems.
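A release gate of the kind described can be as simple as thresholding the attack success rate across a batch of adversarial probe runs. The threshold and result format below are illustrative (real thresholds would come from risk appetite and the safeguard catalogue, and results would be parsed from garak/PyRIT output):

```python
def release_gate(results: list[dict],
                 max_attack_success_rate: float = 0.05) -> bool:
    """Pass the gate only if the adversarial attack success rate is at or
    below the threshold. results: one dict per probe run."""
    if not results:
        return False  # no test evidence means no release
    successes = sum(1 for r in results if r["attack_succeeded"])
    return successes / len(results) <= max_attack_success_rate

# 5 successful attacks out of 100 runs: exactly at the 5% threshold
runs = ([{"probe": "prompt_injection", "attack_succeeded": False}] * 95
        + [{"probe": "prompt_injection", "attack_succeeded": True}] * 5)
release_gate(runs)
```

Failing closed on an empty result set matters in practice: a misconfigured pipeline that runs zero probes should block the release, not wave it through.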