AI Researcher (Guardrails & Responsible AI)

Hybrid Working - Edinburgh OR London - 2 days a week on site.

Financial Services

Lorien's leading banking client is looking for an AI Researcher: a curious, high-calibre thinker (ideal for a recent Master's or PhD graduate) who is passionate about responsible AI, agentic systems, and the science behind guardrail effectiveness. The role sits at the intersection of research, model development, and deep validation, contributing to safety frameworks that directly shape the bank's AI strategy.

The client is advancing its next generation of AI capabilities and is committed to making them safe, explainable, and trusted. The team is building cutting-edge guardrail technologies to ensure AI behaves reliably across text, voice, and emerging multimodal systems.

This role is based in Edinburgh or London, working in a hybrid model of 2 days a week on site.

This role will be engaged via an umbrella company.

What You'll Do

As an AI Researcher, you will focus on designing and building AI and generative AI guardrails that support the safe development, deployment, and productisation of cutting-edge multimodal AI systems.

  • Investigate cutting-edge methods in AI safety, guardrails, alignment, agentic behaviour, and safe model interaction patterns.
  • Conduct research into:
    • Unintended behaviours and emergent risks
    • Multimodal model vulnerabilities
    • Robustness, uncertainty, and adversarial resilience
    • Interpretability and explanation techniques
  • Explore state-of-the-art methods across LLMs, vision-language models, speech models, and emerging agent systems.
  • Monitor research trends, benchmarks, and global developments in AI governance, AI risk, and safety engineering.
  • Develop prototype safety mechanisms, guardrails, and evaluation tools across text, audio, and video modalities.
  • Build and test:
    • Prompt-level guardrails
    • Safety classifiers
    • Behaviour-shaping or reward-modelling components
    • LLM and multimodal fine-tunes
    • Adversarial robustness defences
  • Use Python and modern ML frameworks (e.g., PyTorch, TensorFlow, JAX, HuggingFace).
  • Contribute to the creation of synthetic datasets, adversarial evaluation corpora, and scenario-based test sets.
  • Help transition research outputs into scalable controls for engineering teams to integrate.
  • Design, run, and document in-depth validation experiments to measure guardrail effectiveness.
  • Conduct multimodal red-teaming, stress testing, and failure-mode exploration.
  • Build automated testing and model evaluation pipelines:
    • Safety benchmarks (toxicity, bias, hallucination, jailbreak susceptibility)
    • Multimodal evaluation (vision consistency, audio hallucination, cross-modal attacks)
    • Scoring and calibration analysis
  • Support development of model risk metrics and safety dashboards.
  • Apply frameworks such as HELM (Holistic Evaluation of Language Models) or bespoke NatWest-specific evaluation patterns.
  • Help validate controls that ensure AI systems meet NatWest's responsible AI standards.
  • Work closely with engineers, safety SMEs, and governance teams.
  • Produce high-quality research insights to guide product and platform direction.

Key Skills and Experience

  • Strong Python programming skills and foundations in machine learning, LLMs, or multimodal AI.
  • Understanding of ML concepts such as training, fine-tuning, optimisation, evaluation, and model drift.
  • Experience building or adapting ML models (open-source or proprietary).
  • Ability to design structured experiments and interpret model behaviour through metrics and analysis.
  • Curiosity for emerging topics in AI alignment, agent behaviour, safety engineering, and interpretability.
  • Good grasp of core Responsible AI concepts:
    • Bias and fairness
    • Explainability
    • Privacy-preserving ML
    • Robustness and uncertainty

Nice to have:

  • Experience with ML frameworks: PyTorch, TensorFlow, Flax/JAX, HuggingFace.
  • Exposure to multimodal models (CLIP, Whisper, LLaVA, video transformers).
  • Familiarity with safety benchmarks, adversarial testing, red teaming, or uncertainty estimation.
  • Knowledge of AI governance, risk frameworks, or industry standards (e.g., NIST AI RMF, ISO/IEC 42001).
  • Experience with synthetic data generation or test corpus construction.
  • Familiarity with experiment tracking tools (Comet/Opik, MLflow, SageMaker Experiments).
  • Interest in governance, risk, or AI assurance.

Guidant, Carbon60, Lorien & SRG - The Impellam Group Portfolio are acting as an Employment Business in relation to this vacancy.

Job Details

  • Company: Lorien
  • Location: City of London, London, England, United Kingdom
  • Hybrid / Remote Options
  • Employment Type: Contractor
  • Salary: Salary negotiable
  • Posted: