Founding Researcher
London or New York, partially on-site
AI will write most of the world's code. The interesting bottleneck is no longer whether a model can produce code; it is whether anyone should trust it. We are building the trust stack for AI code generation, targeted at high-stakes computing where wrong numbers cost real money.
A statically typed language designed for coding models. A compiler whose guarantees double as audit-grade trust infrastructure. A coding model fine-tuned on the language, with the compiler emitting the continuous fitness signal that becomes its training reward. Language and model are co-designed.
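One way to read "the compiler emitting the continuous fitness signal that becomes its training reward": compiler diagnostics are mapped to a dense scalar reward rather than a binary pass/fail, so the policy gets partial credit as programs get closer to verifying. This is an illustrative sketch only, not our actual reward shaping; the function name and the 1/(1+n) curve are hypothetical.

```python
def compiler_reward(diagnostics: list[str]) -> float:
    """Map a compiler's diagnostics to a scalar training reward.

    Hypothetical reward shaping: an empty diagnostic list means the
    program verified, earning full reward. Otherwise, fewer diagnostics
    earn more partial credit, giving the fine-tuning loop a denser
    fitness signal than a binary compile/no-compile outcome.
    """
    if not diagnostics:
        return 1.0  # program type-checked and verified
    # Smoothly decay toward 0 as the number of diagnostics grows
    return 1.0 / (1.0 + len(diagnostics))
```

In a reinforcement stage, each sampled program would be run through the compiler and this score used as (part of) the episode reward.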
We are a small team out of PyTorch, FAIR, NYU, and KCL, backed by institutional VCs in our pre-seed, with active investor interest heading into the seed.
What you will do
Own the research agenda across the language and the model. Design type systems and compiler analyses that yield useful fitness gradients. Fine-tune coding models on a language with no pretraining footprint, including the supervised and reinforcement stages that build competence from zero. Publish in PL and ML venues. Choose the next experiments. Help shape what verified computing looks like in the wild.
Who you are
You publish at top venues in machine learning, programming languages, or formal methods; we most want people who cross between them. PhD preferred, but equivalent research output is equally fine.
You can talk fluently about both training dynamics and type systems. You do not need expertise in both, but you need real curiosity about whichever one you arrive without.
You write production code. Our stack is Rust and Python.
You are comfortable with early-stage ambiguity. The team is small. You will define your own roadmap and defend it with evidence.
Why join us
You work directly with founders who have built infrastructure used across the industry. The research is well-scoped, the commercial wedge is real, and the equity reflects how early it is.