Computer Vision & Machine Learning - Face Recognition Lead

About the Company

You will own the machine learning and computer vision components of the YEO CFR SDK: the facial recognition model, the three-layer passive liveness detection system (software depth analysis, pixelation detection, rPPG), anti-spoofing performance, and injection attack countermeasures. You will work alongside the iOS lead (Swift/CoreML), Android lead (Kotlin/C++/MediaPipe), and Desktop lead (Kotlin Multiplatform) to optimise and deploy models across all platforms. The proprietary YEO facial recognition codebase is cross-platform portable and currently in production QA. Your job is to take it from ‘working’ to ‘certified, benchmarked, and bank-ready’.

About the Role

Responsibilities

  • Liveness detection: Own and improve the three-layer passive liveness stack: software-based depth analysis for 3D facial structure detection, pixelation detection for screen/print artefact identification, and rPPG (remote photoplethysmography) for live pulse signal extraction from standard camera feeds.
  • Anti-spoofing: Design and maintain the model pipeline to detect and reject presentation attacks: photographs, screen displays, video replays, 3D silicone masks, latex masks, and AI-generated deepfakes. Prepare the system for ISO 30107-3 PAD Level 2 certification (iBeta) and eventual Level 3.
  • Injection attack detection: Build countermeasures against virtual camera injection, emulator attacks, app hooking, and synthetic video stream insertion. Implement camera integrity verification, device attestation, and runtime environment checks aligned with CEN/TS 18099.
  • On-device model optimisation: Ensure all models run at inference speeds below 20ms per frame on mid-range devices (3+ years old), with battery impact below 3%, while maintaining false acceptance rate (FAR) and false rejection rate (FRR) at production-grade thresholds.
  • Cross-platform deployment: Convert and optimise models for CoreML (iOS), TFLite (Android), and ONNX Runtime or equivalent (Desktop). Manage the differences in inference performance across runtimes.
  • Benchmarking: Establish the performance benchmarking pipeline covering verification speed, battery impact, FAR, FRR, liveness detection accuracy, and anti-spoof detection rates, and maintain benchmarks across every release.
  • Certification support: Prepare the system for ISO 30107-3 PAD testing (iBeta), FIDO Face Verification, and CEN/TS 18099 IAD evaluation. Understand what the testing labs test, design the training and evaluation pipeline accordingly, and manage the certification process.
  • rPPG pipeline: This is the single most technically differentiated component. You will own the rPPG signal extraction pipeline — bandpass filtering, chrominance-based methods (POS, CHROM, or equivalent), pulse signal estimation from standard RGB camera input. The objective: reliably detect a live physiological pulse signal in variable lighting, at varying distances, on devices with no depth sensor, at frame rates as low as 15fps.
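For candidates gauging fit with the rPPG responsibility above: a minimal, illustrative sketch of chrominance-based pulse extraction (CHROM-style projection plus bandpass filtering) can be prototyped in NumPy/SciPy. The helper names (`chrom_pulse`, `pulse_rate_bpm`) and the synthetic 15 fps input are assumptions for demonstration, not part of the YEO codebase.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, lo=0.7, hi=4.0, order=3):
    """Butterworth bandpass limited to plausible heart rates (42-240 bpm)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def chrom_pulse(rgb, fs):
    """CHROM-style pulse signal from per-frame mean RGB values, shape [T, 3]."""
    rgb = rgb / rgb.mean(axis=0)                    # normalise each channel
    x = 3.0 * rgb[:, 0] - 2.0 * rgb[:, 1]           # chrominance projections
    y = 1.5 * rgb[:, 0] + rgb[:, 1] - 1.5 * rgb[:, 2]
    xf, yf = bandpass(x, fs), bandpass(y, fs)
    alpha = xf.std() / yf.std()                     # tune mixing to motion/noise
    return xf - alpha * yf

def pulse_rate_bpm(signal, fs):
    """Dominant frequency of the pulse signal within the heart-rate band, in bpm."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic demo: a 15 fps feed with a 72 bpm pulse riding on the green channel.
fs = 15
t = np.arange(0, 20, 1.0 / fs)
rgb = np.full((len(t), 3), 100.0)
rgb[:, 1] += 0.5 * np.sin(2 * np.pi * 1.2 * t)      # 1.2 Hz = 72 bpm
bpm = pulse_rate_bpm(chrom_pulse(rgb, fs), fs)
```

The 15 fps demo rate deliberately matches the posting's worst-case frame rate; a production pipeline would add face-ROI tracking, illumination normalisation, and signal-quality gating on top of this core.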

Qualifications

  • ML Frameworks: PyTorch (expert — primary training), TFLite (expert — Android/iOS inference), CoreML (strong — iOS Neural Engine), ONNX (strong — cross-platform interop), MediaPipe (competent — Android face landmarker integration).
  • Computer Vision & Signal Processing: OpenCV (expert), NumPy/SciPy (expert — rPPG signal processing), dlib (competent), rPPG methods specifically (POS, CHROM, chrominance decomposition, bandpass filtering, pulse extraction from RGB video), monocular depth estimation for Android devices without hardware depth sensors.
  • Model Optimisation & Edge Deployment: Quantisation — PTQ and QAT, int8/float16/mixed precision (expert). Model pruning and knowledge distillation (strong). Mobile architectures — MobileNet, EfficientNet, ShuffleNet, GhostNet (competent). Profiling — Xcode Instruments, Android Studio Profiler, TFLite Benchmark Tool (expert). Hardware acceleration — Apple Neural Engine, NNAPI, GPU delegates, Qualcomm Hexagon DSP (strong).
  • Anti-Spoofing & Security: PAD datasets — CASIA-FASD, Replay-Attack, OULU-NPU, SiW, CelebA-Spoof (strong). Deepfake detection awareness. Injection attack countermeasures — virtual camera detection, emulator detection, SafetyNet/Play Integrity, DeviceCheck/App Attest (competent). ISO 30107-3 and CEN/TS 18099 understanding.
  • Programming: Python (expert), C++ for inference (strong — Android NDK), Swift and Kotlin awareness (basic — enough to understand integration surface), Git/CI/CD, Docker, MLflow/W&B.
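To illustrate the quantisation expertise listed above: a bare-bones sketch of symmetric int8 post-training quantisation in NumPy. The helper names and the random weight tensor are hypothetical; real PTQ toolchains (TFLite, CoreML) also handle per-channel scales, activation calibration, and zero-points.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric PTQ: map the largest |weight| to the int8 extreme (127)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for accuracy checks."""
    return q.astype(np.float32) * scale

# Demo on a random weight matrix; worst-case rounding error is scale / 2.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
```

The `scale / 2` error bound is what makes symmetric round-to-nearest attractive as a baseline before reaching for QAT.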

Required Skills

Expertise in machine learning frameworks, computer vision, signal processing, model optimisation, and programming languages as detailed above.

Preferred Skills

Experience with anti-spoofing techniques, deepfake detection, and familiarity with certification processes.

Pay range and compensation package

Details on pay and compensation will be provided during the interview process.

Equal Opportunity Statement

We are committed to diversity and inclusivity in our hiring practices.

Job Details

Company
YEO Messaging
Location
United Kingdom