AI Risk Practitioner

About governr 

governr is the control and accountability infra for AI in regulated enterprises - the system of record for a firm’s entire AI estate, with 24/7 risk exposure management and regulatory adherence. We were founded by operators who have built risk and control stacks for high-frequency trading and for volatility and tail-risk management, now applied to non-deterministic AI at scale. Our advisory panel includes former global heads of capital markets, risk management, and AI deployment, and country-level leadership of critical security infrastructure. We know business, risk and technology from the inside. 

The moment 

Every regulated firm on earth is required - by its board, its regulator, and its insurer - to prove it has control over its AI. Most cannot.

The EU AI Act, the FCA’s PS26/2, FINRA, MAS, SEC, Freddie Mac, HIPAA, MHRA, MOD, DORA, GDPR Article 22, Consumer Duty, FRC, SMCR - every regulatory jurisdiction across industry segments demands this, and makes AI accountability personal. Insurers are pulling cover.

This is the Wiz moment for AI - the point where AI’s control and context gap turns into a category-defining platform. We have the business risk tailwind, the regulatory tailwind, the team, and a 12-18 month technical lead built over years of work inside these firms. The window to define this category is open now, and it won’t be open for long. 

The product 

governr combines 60 AI-specific risk factors, 80+ agent-level controls, and 850+ regulatory mappings into a single control panel. Every AI system, model, agent, and supplier in a firm’s estate - risk managed, monitored, and auditable with AI-native automation. It makes AI deployment, trust, and risk answerable in minutes, not months. It is the first and only platform purpose-built for the AI control problem regulated enterprises now face. 

Practical experience:

  • Worked in or around AI/ML teams in a delivery or assurance capacity. You need not have been a builder yourself, but you have been close enough to data pipelines, model deployment, and API layers to understand what "done" looks like.
  • A background in technology risk, ML engineering, or technical consulting with AI exposure all works; three to five years of experience.
  • You'll have seen enough real AI estates to recognise what Gap and Weak actually look like in practice. A lot of the value in a guidance engagement is pattern recognition: spotting that a firm's "data quality process" is actually just a Notion doc someone wrote once, or that their "rate limiting" is a single nginx config that nobody has reviewed.

Specific things you should be able to do

  • Read and interpret a data pipeline well enough to identify where lineage tracking, PII scanning, or quality checks are missing.
  • Review model evaluation results and know whether HELM benchmarks have been run properly or just ticked off.
  • Assess whether an agent's system prompt actually enforces the iteration limits and tool restrictions the Baseline requires.
  • Look at an API auth setup and know whether OAuth2 has been implemented correctly or just partially. Identify whether logging is genuinely structured and centralised or just console output someone has called "logs."
  • You don't need to build any of these things - if you can, that's a bonus. You do need to know what a properly implemented version looks like versus a superficial one.

Useful background

  • You've held a role that required reviewing or auditing technical AI or ML systems, not just advising on strategy.
  • You've written technical risk assessments or assurance reports that engineering teams found credible and useful, not just governance teams.
  • You are comfortable facilitating workshops with mixed technical and non-technical audiences.
  • Familiar with at least one cloud provider's data and ML tooling stack, well enough to know where common gaps appear.
  • Some exposure to the EU AI Act, GDPR, or NIST AI RMF is useful but not essential at Baseline.
  • Governance and policy - you may have written operational policies and procedures that actually got used, not just filed: a retention policy, data protection impact assessments, or an AI governance framework that went through legal or regulatory scrutiny and held up.
  • Background in technology governance, data protection, compliance, or risk management.

What you get 

  • Meaningful equity. A salary competitive for the stage.
  • Direct founder exposure: You will work closely with the people building the company and have a real seat at the table with senior, industry-leading practitioners and researchers across AI, risk, data, and technology - including agentic systems, security management, and context graphs
  • Real responsibility early: This is a chance to take ownership, not just observe
  • High-value customer exposure: You will interact with serious, senior stakeholders at important firms
  • Career-building topic area: AI risk and governance is one of the defining enterprise issues of this era
  • Front-row seat to building a category: You will see how a category, and a business, gets built
  • Varied work: The role cuts across customer success, GTM, operations, and market-facing activity, so there is room to learn fast

How to apply

  • Send a short note to rajen@governr.ai explaining, in your own words, why this role and why now. No cover letter template. No CV tricks. Tell us about your experience. Be ready to talk over a 20-minute coffee or a call, over the weekend or in the evening

Job Details

Company
governr
Location
City of London, Greater London, UK
Posted