Principal AI Research & Engineering Leader
About Our Team
We are at the forefront of AI innovation, building the core technical engine that powers our next generation of enterprise products. Our mission is to translate complex, cutting-edge academic research into scalable, reliable, and commercially impactful solutions. We operate with the agility of a startup and the resources of a global leader, fostering a culture of technical rigor, publication, and groundbreaking work.
The Opportunity
We are seeking a Principal AI Research & Engineering Leader to define the technical foundation and innovation roadmap for our entire AI division. This is a hands-on, high-impact technical leadership role for an individual who excels at merging deep academic research with production-grade engineering. You will be responsible for leading a team of elite researchers and engineers, defining our core ML/AI research agenda, and ensuring the robust, scalable deployment of novel algorithms into our production environment.
What You Will Lead: Key Responsibilities
1. Technical Vision & Research Strategy
- Set the Technical Roadmap: Define the multi-year research and engineering roadmap for the AI platform, prioritizing projects that solve critical, high-complexity technical challenges (e.g., model efficiency, interpretability, real-time inference).
- Deep Learning & Modeling: Lead the design, implementation, and optimization of advanced Deep Learning, Generative AI, and Reinforcement Learning models from scratch, pushing the state of the art for our domain.
- Academic Translation: Monitor and translate bleeding-edge academic research and industry trends into rapid prototypes suitable for mission-critical production deployment.
2. MLOps & Production Engineering Excellence
- MLOps Leadership: Establish and enforce industry-leading best practices for MLOps (Machine Learning Operations), ensuring automation, reproducibility, version control, and continuous integration/continuous delivery (CI/CD) for all models.
- Architecture Review: Personally review and approve the technical architecture for all deployed AI systems, ensuring they meet strict criteria for scalability, low latency, and fault tolerance.
- Resource Optimization: Drive research into optimizing computational costs for large models, including strategies for model compression, quantization, and efficient hardware utilization.
3. Team Leadership & Technical Mentorship
- Lead Technical Talent: Recruit, mentor, and manage a high-performing team of Applied AI Scientists, Machine Learning Engineers, and Researchers.
- Culture of Rigor: Foster a technically demanding and research-driven culture, encouraging publication, patent filing, and open-source contributions.
- Code Quality: Ensure all core AI codebases adhere to the highest standards of quality, documentation, and maintainability.
Qualifications, Skills, and Competencies
Required Qualifications
- Experience: 15+ years of hands-on experience in Machine Learning, Deep Learning, or AI Research, with a focus on building and deploying complex models at scale.
- Leadership: 5+ years of technical leadership experience managing a team of Data Scientists and ML Engineers.
- Expertise: Expert-level proficiency in core ML frameworks (e.g., PyTorch, TensorFlow) and data science languages (e.g., Python).
- Generative AI: Proven experience building a commercial practice or product focused on Generative AI and Large Language Models (LLMs).
- Domain Breadth: Demonstrated expertise in at least two major AI domains (e.g., Deep Learning, NLP, Computer Vision).
- Impact: Proven track record of success in a client-facing or internal product-focused role, directly translating technical output into commercial results.
- MLOps: Deep practical knowledge of MLOps principles and extensive experience with cloud-native ML services (e.g., Google Cloud Vertex AI, AWS SageMaker, Azure ML).
- Education: Ph.D. or Master's degree in Computer Science, Machine Learning, or a highly quantitative field, OR equivalent demonstrated technical leadership experience.
Preferred Qualifications
- A strong portfolio of research publications (ICML, NeurIPS, KDD, etc.) or patents related to applied AI.
- Extensive experience with distributed computing frameworks (e.g., Spark, Ray) for large-scale model training and inference.