Research Scientist or Engineer

Application Deadline

We're currently considering applications on a rolling basis. It may take several weeks for us to respond, even if you are a great fit.

About the Opportunity

We're looking for Research Scientists and Research Engineers who are excited to work on safety evaluations, the science of scheming, or control/monitoring for frontier models.

You Will Have The Opportunity To
  • Work with frontier labs like OpenAI, Anthropic, and Google DeepMind by running pre-deployment evaluations and collaborating closely on mitigations; see our work on anti-scheming, OpenAI's o1-preview system card, and Anthropic's Opus 4 and Sonnet 4 system card.
  • Build evaluations for scheming-related properties such as deceptive reasoning, sabotage, and deception tendencies. See our conceptual work on evaluation-based safety cases for scheming or on how scheming could arise.
  • Work on the science of scheming, studying model organisms or real-world examples to develop a better theoretical understanding of why models scheme and which components of training and deployment cause it.
  • Automate the evals pipeline, aiming to automate substantial parts of evals ideation, generation, running, and analysis.
  • Design and evaluate AI control protocols, shifting effort to deployment-time monitoring and other control methods for agents with longer horizons.
  • Note: We are not hiring for interpretability roles.
Key Requirements
  • We don't require a formal background or industry experience and welcome self-taught candidates.
  • Experience in empirical research related to scheming, AI control, and evaluations, and a scientific mindset: you have designed and executed experiments, can identify alternative explanations for findings, and can test alternative hypotheses to avoid over-interpreting results. This experience can come from academia, industry, or independent research.
  • Track record of excellent scientific writing and communication: you can understand and communicate complex technical concepts to our target audience and synthesize scientific results into coherent narratives.
  • Comprehensive experience in large language model steering and the supporting data science and data engineering skills. Steering can take many forms: prompting; LM agents and scaffolding; fluent LLM usage and integration into your own workflows; experience with supervised fine-tuning; and experience with RL on LLMs.
  • Software engineering skills: our entire stack uses Python, and we're looking for candidates with strong software engineering experience.
  • (Bonus) Experience with the Inspect evals framework is valued.
  • Depending on your preferred role and how these characteristics weigh up, we can offer either a Research Scientist (RS) or Research Engineer (RE) role.

We want to emphasize that people who feel they don't fulfill all of these characteristics, but nonetheless think they would be a good fit for the position, are strongly encouraged to apply. We believe that excellent candidates can come from a variety of backgrounds and are excited to give you opportunities to shine.

Logistics
  • Start Date: Target of 2-3 months after the first interview.
  • Time Allocation: Full-time
  • Location: The office is in London, shared with the London Initiative for Safe AI (LISA). This is an in-person role. In rare situations, we may consider partially remote arrangements on a case-by-case basis.
  • Work Visas: We can sponsor UK visas.
Benefits
  • Salary: 100k-200k GBP (approx. 135k-270k USD)
  • Flexible work hours and schedule
  • Unlimited vacation
  • Unlimited sick leave
  • Lunch, dinner, and snacks are provided for all employees on workdays
  • Paid work trips, including staff retreats, business trips, and relevant conferences
  • A yearly $1,000 (USD) professional development budget
About Apollo Research

The rapid rise in AI capabilities offers tremendous opportunities but also presents significant risks. At Apollo Research, we're primarily concerned with risks from loss of control, i.e. risks coming directly from the model rather than from human misuse. We're particularly concerned with deceptive alignment/scheming, a phenomenon where a model appears aligned but is, in fact, misaligned and capable of evading human oversight. We work on detecting scheming, building evaluations, studying the science of scheming, and developing mitigations. We work closely with multiple frontier AI companies to test their models before deployment and to collaborate on scheming mitigations.

About the Team

The current evals team consists of Mikita Balesni, Jérémy Scheurer, Alex Meinke, Rusheb Shah, Bronson Schoen, Andrei Matveiakin, Felix Hofstätter, Axel Højmark, Nix Goldowsky-Dill, Teun van der Weij, and Alex Lloyd. Marius Hobbhahn manages and advises the evals team, though team members lead individual projects. You will mostly work with the evals team, but you will likely sometimes interact with the governance team to translate technical knowledge into concrete recommendations. You can find our full team here.

Equality Statement

Apollo Research is an Equal Opportunity Employer. We value diversity and are committed to providing equal opportunities to all, regardless of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, or sexual orientation.

How to Apply

Please complete the application form with your CV. A cover letter is optional. Please also feel free to share links to relevant work samples.

About the Interview Process

Our multi-stage process includes a screening interview, a take-home test (approx. 2.5 hours), three technical interviews, and a final interview with Marius (CEO). The technical interviews will be closely related to tasks the candidate would do on the job. There are no LeetCode-style general coding interviews. If you want to prepare for the interviews, we suggest working on hands-on LLM evals projects, such as building LM agent evaluations in Inspect.
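If you'd like a concrete starting point, below is a minimal sketch of a toy Inspect task. It is purely illustrative and not part of our interview process: the file name, task name, samples, and model string are made up, and the snippet assumes a recent version of the open-source inspect_ai package.

    # toy_eval.py - an illustrative, minimal Inspect task (assumes inspect_ai is installed)
    from inspect_ai import Task, task
    from inspect_ai.dataset import Sample
    from inspect_ai.solver import generate
    from inspect_ai.scorer import includes

    @task
    def toy_capital_cities():
        # A tiny hand-written dataset; a real eval would load many samples from a file.
        dataset = [
            Sample(input="What is the capital of France? Answer with one word.", target="Paris"),
            Sample(input="What is the capital of Japan? Answer with one word.", target="Tokyo"),
        ]
        return Task(
            dataset=dataset,
            solver=generate(),   # simply query the model; agent evals would add tools and scaffolding
            scorer=includes(),   # mark a sample correct if the target string appears in the output
        )

    # Run from the command line, for example:
    #   inspect eval toy_eval.py --model openai/gpt-4o-mini

A real LM agent evaluation would replace the hand-written samples with a larger dataset, give the model tools and a multi-step scaffold, and use a more careful scorer, but the overall Task structure stays the same.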

Application Deadline

We are accepting applications until 31 October 2025. We encourage early submissions and will start interviews in early October.

Your Privacy and Fairness in Our Recruitment Process

We are committed to protecting your data, ensuring fairness, and adhering to workplace fairness principles in our recruitment process. To enhance hiring efficiency, we use AI-powered tools to assist with tasks such as resume screening. These tools are designed and deployed in compliance with internationally recognized AI governance frameworks. Your personal data is handled securely and transparently. We take a human-centred approach: all resumes are screened by a human, and final hiring decisions are made by our team. If you have questions about how your data is processed or wish to report concerns about fairness, please contact us at .
