Intern Prompt Engineer
About CleverChain
CleverChain is a growth-stage RegTech company and an award-winning KYB platform, recognised by Chartis Research for Best Know Your Business (KYB) and by Datos Insights for Best KYC/KYB Innovation. Our cloud-based platform automates compliance processes, streamlining customer onboarding and risk monitoring for financial institutions, fintechs, payment providers, and any organisation that needs to verify and manage its customers. We're at a pivotal moment of growth and looking to make critical hires that will help us get there.
Role Overview
We're looking for a curious, detail-oriented Prompt Engineer to join our team on a paid internship basis. You'll play a hands-on role in shaping how large language models behave across our product, crafting system prompts, defining structured output schemas, and optimising LLM interactions within automated workflows.
This isn't a "watch and learn" internship. You'll be directly responsible for configuring and refining the AI layer of real, production-facing systems. If you spend your evenings tinkering with AI models, benchmarking them against each other, reading papers about tool use, or building your own AI solutions, we want to hear from you.
What You'll Be Doing
- Designing and iterating on system prompts that guide LLM behaviour within multi-step automated workflows
- Defining and maintaining structured output schemas (JSON Schema, function calling formats, etc.) to ensure reliable, parseable model responses
- Working with search-grounded LLMs, configuring retrieval and grounding strategies to improve factual accuracy and relevance
- Operating across multiple models and providers (OpenAI, Anthropic, Google, open-source models, and others), understanding their respective strengths, quirks, and trade-offs
- Testing, evaluating, and optimising prompt performance, measuring output quality, latency, token efficiency, and consistency
- Documenting prompt patterns and best practices to build an internal knowledge base the wider team can learn from
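To give a flavour of the structured-output work above: much of it comes down to defining a schema a model must conform to and checking its responses against it. The sketch below is purely illustrative — the field names and the `conformsTo` helper are hypothetical, not CleverChain's actual schemas or code.

```javascript
// Hypothetical schema for a structured model response in a KYB-style workflow.
// Field names are invented for illustration only.
const riskAssessmentSchema = {
  type: "object",
  required: ["entityName", "riskLevel", "reasons"],
  properties: {
    entityName: { type: "string" },
    riskLevel: { type: "string", enum: ["low", "medium", "high"] },
    reasons: { type: "array", items: { type: "string" } },
  },
};

// Minimal conformance check: does the response contain every required key?
// (Real pipelines would use a full JSON Schema validator.)
function conformsTo(schema, response) {
  return schema.required.every((key) => key in response);
}

const modelResponse = {
  entityName: "Acme Ltd",
  riskLevel: "low",
  reasons: ["registered for over 5 years", "no adverse media"],
};

console.log(conformsTo(riskAssessmentSchema, modelResponse)); // true
```

In practice, the interesting part of the role is the other half: deciding what to do when a response doesn't conform — retry, repair, or escalate.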
Skills & Experience
Must-Haves
- Demonstrable experience with prompt engineering — whether professional, academic, or through serious personal projects. Show us what you've built or figured out.
- Strong understanding of LLM request optimisation — token management, context window strategy, temperature/sampling tuning, and knowing when a problem is a prompt problem vs. a model problem
- Hands-on experience with structured outputs — you understand how to coerce a model into returning clean, schema-compliant responses and how to handle it when it doesn't
- Familiarity with multiple LLM providers and models — you know the landscape and can articulate why you'd pick one model over another for a given task
- Excellent written communication — prompt engineering is, at its core, a writing discipline. Precision and clarity in natural language are non-negotiable.
- A genuine passion for LLMs and agentic AI — you follow the space closely, you have opinions, and you're excited about where it's heading
Nice-to-Haves
- Experience with workflow automation platforms (e.g., n8n, Make, Zapier, or similar tools)
- Familiarity with agentic AI patterns — tool use, planning, multi-step reasoning, agent loops
- Exposure to evaluation frameworks or techniques for assessing LLM output quality
- Basic programming ability (JavaScript) — enough to read and understand code, even if you're not writing it daily
The Kind of Person Who Thrives Here
- You're self-directed. You don't wait to be told what to try next — you hypothesise, test, and report back.
- You're obsessively iterative. A prompt that "mostly works" isn't good enough. You refine until it's robust.
- You communicate clearly. You can explain to a non-technical stakeholder why a prompt behaves the way it does and what the trade-offs are.
- You're adaptable. The models change, the best practices change, the landscape shifts every few weeks. You keep up, and you enjoy it.
- You think in systems, not just single interactions. You understand that a prompt exists within a broader workflow and that upstream and downstream context matters.
What We Offer
- Shape the AI Layer from the Ground Up: You'll be joining early and having a direct hand in defining how LLMs behave across our product. The prompt patterns, schemas, and conventions you establish will become the foundation others build on.
- Work on Interesting Problems: Search-grounded generation, multi-model orchestration, structured output reliability, agentic workflows. The challenges here are real, varied, and at the cutting edge of applied AI. This isn't busywork.
- Exposure to a Production AI Stack: You'll be configuring and optimising LLM interactions in real, shipped software — not a sandbox, not a side project. What you build will be used.
- Direct Impact: In a small team, your work ships and matters immediately. A prompt you refine on Tuesday could be in production by Wednesday. You'll see the results of your contributions every day.
- Flexible Hours: We have core hours for alignment, but we trust you to manage your time. Get the work done, pick the schedule that works for you.
- A Culture of Trust and Collaboration: Open and frequent communication, mutual respect, and a shared commitment to building something that matters. We'll treat you as a teammate, not "the intern."
- Room to Grow: This is a paid internship, but we invest in people, not just positions. Promising candidates will have genuine opportunities for self-development, mentorship, and skill-building; outstanding performance may lead to further roles within the company as we scale.
Application Process
Attach your CV to the LinkedIn application, along with a brief note about what caught your interest. If you have a GitHub profile, side project, or writing that shows how you think about software, we'd love to see it, but it's not required.
Our process is straightforward: a conversation about your experience, followed by a practical technical discussion about real problems.