AI Deployment & Platform Engineer

LEC AI | Central London (Knightsbridge) | Full-Time | 5 Days On-Site

About LEC AI

LEC AI is the intelligence systems division inside London Export Corporation, a London-headquartered group operating across AI, forecasting, robotics, logistics, commerce, and operational software.

We are at a high-growth stage, scaling from a founding team to production AI systems deployed across multiple businesses. We operate with the speed and ownership of an early-stage startup, backed by an established UK trading group with a 70-year operating history. Startup energy without the funding-cycle risk.

We build production AI systems that operate inside real businesses, not demos or isolated copilots. Our systems already run in live environments and are expanding into organisational intelligence, structured-data prediction, multi-agent infrastructure, and SaaS products serving multiple businesses.

This role focuses on the deployment, reliability, scaling, and operational infrastructure underneath those systems.

The Role

We are hiring an AI Deployment & Platform Engineer to build and operate the infrastructure layer powering our AI systems in production.

You will work directly with the AI systems engineering team to deploy AI systems into live environments, manage runtime infrastructure, scale orchestration systems, optimise inference performance, and build the deployment pipelines and observability that keep everything running.

This is a deeply hands-on engineering role for someone who enjoys building production infrastructure, solving operational problems, and making AI systems reliable at scale.

What You Will Build

Deployment Infrastructure

• Deploy and manage AI systems primarily across AWS and Azure, with Alibaba Cloud for China-based deployments and GCP as workloads require

• Containerise and orchestrate AI workloads at scale

• Build CI/CD pipelines for AI systems and model deployments

• Manage inference infrastructure and deployment automation

• Design scalable runtime environments for multi-agent systems

Reliability and Scaling

• Monitor system performance, latency, throughput, and uptime

• Build observability, logging, and alerting systems

• Manage autoscaling and infrastructure optimisation

• Debug production failures and runtime bottlenecks

Infrastructure Operations

• Monitor model drift, data drift, and runtime quality degradation

• Implement rollback, failover, and deployment safety systems

• Manage GPU infrastructure and workload scheduling

• Optimise model serving costs and cloud spend

You will support deployment and operations for organisational intelligence platforms, large-scale prediction systems, multi-agent workflows, multimodal AI systems, and future AI-native SaaS products.

Who You Are

You have 3+ years of experience operating production infrastructure under real-world conditions. You are highly hands-on and comfortable owning systems directly. You understand that AI systems are operational systems, and that reliability, latency, observability, and cost control matter as much as model quality.

You write production code regularly. Python is expected.

Strong experience across the following is highly valuable:

• containerisation and orchestration

• major cloud platforms (AWS, Azure)

• infrastructure-as-code

• backend API frameworks

• caching layers and in-memory data stores

• relational and vector databases

• workflow orchestration

• CI/CD pipelines

• GPU infrastructure

• monitoring and observability stacks

A strong plus:

• inference optimisation

• model serving runtimes

• async and streaming systems

• MLOps tooling

• multi-agent systems

• Alibaba Cloud or other China cloud providers

Strong Signals

• Built and operated AI systems in production

• Managed cloud infrastructure at scale

• Reduced infrastructure cost or inference latency significantly

• Built deployment automation pipelines

• Worked on real-time or high-throughput systems

• Strong debugging and systems instincts

• Comfortable in fast-moving environments with high ownership

• Mandarin is a bonus given our UK and China operations

Why This Role Is Different

This is a founding-stage infrastructure hire inside a high-growth AI division. The systems you deploy will run inside active businesses with real operational impact, not pilots that get shelved.

You will work on multi-agent systems, orchestration runtimes, large-scale prediction systems, and real-time AI deployment.

Small team. Fast execution. Minimal bureaucracy. Good ideas move quickly into production. Your work is visible to the leadership team and shapes the platform from day one.

We are building infrastructure designed to compound over time: memory, operational intelligence, orchestration, and reusable AI platforms. The person in this role will own a large slice of how those systems run in production.

Location and Eligibility

Based in Central London (Knightsbridge). Full-time, 5 days on-site.

Salary: £40,000 to £60,000, depending on experience, with significant upside as LEC AI scales.

We are unable to provide visa sponsorship for this role. Applicants must have the right to work in the UK.

How to Apply

Apply on LinkedIn and email your portfolio to talent@lecai.ai with the subject line:

AI Deployment & Platform Engineer

Show us systems you have deployed, infrastructure you have operated, CI/CD pipelines you have built, your GitHub profile, and any debugging or scaling problems you have solved.
