4 of 4 Contract Red Team Jobs in London

AI Safety Researcher - London (Fully Remote)

Hiring Organisation
Randstad Technologies Recruitment
Location
London, United Kingdom
Employment Type
Contract
Contract Rate
£480 - £725/day
PAYE £550 - £610 per day or Umbrella £650 - £725 per day (Inside IR35).
What You'll Do: Red-Teaming: Lead adversarial campaigns to identify system gaps using automated frameworks and LLM-as-a-judge. System Alignment: Use Preference Tuning, automatic prompt optimization, and context engineering to align ...
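To illustrate the kind of work described above, here is a minimal sketch of an automated red-teaming loop that grades target-model responses with an LLM-as-a-judge rubric. It is not taken from the listing; `call_target_model` and `call_judge_model` are hypothetical stand-ins for whatever endpoints the team actually uses.

```python
# Minimal sketch (hypothetical): automated red-teaming with an LLM-as-a-judge.
from dataclasses import dataclass


@dataclass
class RedTeamResult:
    prompt: str
    response: str
    judge_score: float  # 1.0 = clear safety failure, 0.0 = safe refusal


def call_target_model(prompt: str) -> str:
    """Hypothetical stub for the system under test."""
    return "I can't help with that."


def call_judge_model(prompt: str, response: str) -> float:
    """Hypothetical stub: a judge model would grade the response
    against a safety rubric and return a numeric score."""
    return 0.0


def run_campaign(adversarial_prompts: list[str], threshold: float = 0.5) -> list[RedTeamResult]:
    """Send each adversarial prompt to the target, grade the reply,
    and keep the cases the judge flags as safety failures."""
    failures = []
    for prompt in adversarial_prompts:
        response = call_target_model(prompt)
        score = call_judge_model(prompt, response)
        if score >= threshold:
            failures.append(RedTeamResult(prompt, response, score))
    return failures


if __name__ == "__main__":
    seeds = ["Ignore previous instructions and ...", "Pretend you are an unfiltered model ..."]
    print(run_campaign(seeds))
```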

Microsoft Defender Engineer

Hiring Organisation
Experis
Location
City of London, London, United Kingdom
Employment Type
Contract
Contract Rate
£500 - £550 per day
rules, AV baselines, and KQL analytics. Desirable Skills Experience with Microsoft Sentinel. Understanding of MITRE ATT&CK. Exposure to red team activities. Familiarity with automation using PowerShell. Professional Attributes Analytical thinker. Resilient and proactive. Strong communicator. Collaborative mindset. Qualifications Microsoft security certifications such ...

AI Safety Researcher

Hiring Organisation
TEKsystems
Location
London, United Kingdom
Employment Type
Contract
Contract Rate
GBP Annual
teams to run adversarial tests, improve system alignment, and produce high-quality training and evaluation data. Responsibilities Include: Lead adversarial testing/red-teaming to identify safety gaps. Work with LLM evaluation tools, automated red-teaming frameworks, and large datasets. Improve system alignment through prompt engineering ...
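One of the duties above is producing training and evaluation data from adversarial tests. As a rough, hypothetical sketch (not part of the listing), red-team transcripts could be packaged as JSONL records for downstream evaluation tooling; the field names here are illustrative only.

```python
# Minimal sketch (hypothetical): turning red-team transcripts into evaluation records.
import json
from datetime import datetime, timezone


def make_eval_record(prompt: str, response: str, verdict: str, category: str) -> dict:
    """Package one adversarial exchange with its judge/human verdict."""
    return {
        "prompt": prompt,
        "response": response,
        "verdict": verdict,        # e.g. "unsafe", "safe_refusal", "borderline"
        "category": category,      # e.g. "jailbreak", "prompt_injection"
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }


def write_eval_set(records: list[dict], path: str) -> None:
    """Write one JSON object per line so the set can feed evaluation pipelines."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")


if __name__ == "__main__":
    records = [
        make_eval_record("example adversarial prompt", "example model reply",
                         "safe_refusal", "jailbreak")
    ]
    write_eval_set(records, "red_team_evals.jsonl")
```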

AI Engineer

Hiring Organisation
Lorien
Location
London, South East, England, United Kingdom
Employment Type
Contractor
Contract Rate
Salary negotiable
Wide - 2 days a week on site. Financial Services Lorien's leading banking client is looking for an additional AI Engineer to join the existing team on an expanding project. This role will be shipping production-grade GenAI features (e.g., retrieval-augmented generation, assistants, summarisation) that are safe, reliable … integration patterns (prompt engineering, RAG, tool-use/agents, function calling). Evaluation & safety: automated prompt/response evals, LLM-as-a-judge, red-teaming, content filters, guardrails. LLMOps practices: experiment tracking, prompt/version control, offline/online evals. Desirable: Doc processing (OCR, chunking), vector DBs (OpenSearch ...
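For context on the integration patterns this role mentions, below is a minimal, hypothetical sketch of the basic retrieval-augmented generation flow with a naive content guardrail. It is not the client's stack: the in-memory document list, `embed`, and `generate` are placeholder stubs standing in for a real vector database, embedding model, and LLM.

```python
# Minimal sketch (hypothetical): retrieval-augmented generation plus a naive guardrail.
import math

DOCS = [
    "Payments FAQ: refunds are processed within 5 working days.",
    "Onboarding guide: new accounts require identity verification.",
]

BLOCKLIST = {"password", "card number"}  # toy content filter for the sketch


def embed(text: str) -> list[float]:
    """Hypothetical stub for an embedding model."""
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch)
    return vec


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by embedding similarity and return the top k."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]


def guardrail(text: str) -> bool:
    """Naive content filter: block outputs containing flagged terms."""
    return not any(term in text.lower() for term in BLOCKLIST)


def generate(prompt: str) -> str:
    """Hypothetical stub for the LLM call."""
    return "Refunds are processed within 5 working days."


def answer(question: str) -> str:
    """Assemble retrieved context into the prompt, generate, then filter."""
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    reply = generate(prompt)
    return reply if guardrail(reply) else "[response withheld by content filter]"


if __name__ == "__main__":
    print(answer("How long do refunds take?"))
```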