OpenAI Architect (FDE)

Role Summary

Lead the architecture and productionisation of OpenAI‑first solutions in a forward‑deployed model. You will embed with customers to design secure, scalable patterns around ChatGPT Enterprise rollout and administration (SSO/SCIM/RBAC, data controls), OpenAI API endpoints (Assistants and tool/function calling, Responses/Chat Completions, Embeddings, Files/Batch, Moderations), fine‑tuning pipelines, and agentic RAG, then drive PoC → Production with governance, observability, and cost control. Keep solutions portable through pragmatic use of cloud services, LangChain/LangGraph/Semantic Kernel, and standard vector stores.

What you’ll do

OpenAI delivery

• ChatGPT Enterprise deployment & governance: Plan workspaces; implement SSO/SCIM, role models, and policy guardrails; set up usage analytics; define governance for custom GPTs (actions/connectors, approvals) and runbooks.

• OpenAI API architecture: Design patterns for Assistants with multi‑tool orchestration, structured outputs (JSON schemas), function/tool calling, Files/Batch for bulk jobs, Moderations, and Embeddings for retrieval.
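By way of illustration only, a minimal sketch of a tool definition in the Chat Completions tool‑calling shape, with a helper that validates the JSON arguments a model returns for a call; the `lookup_order` tool and its parameters are hypothetical examples, not part of this role description:

```python
import json

# Hypothetical tool definition in the Chat Completions tool-calling shape;
# the "lookup_order" name and its parameters are illustrative assumptions.
lookup_order_tool = {
    "type": "function",
    "function": {
        "name": "lookup_order",
        "description": "Fetch an order's status by its identifier.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {"type": "string", "description": "Order identifier."},
            },
            "required": ["order_id"],
            "additionalProperties": False,
        },
    },
}

def parse_tool_call(tool_call_arguments: str) -> dict:
    """Parse and sanity-check the JSON arguments returned for a tool call."""
    args = json.loads(tool_call_arguments)
    if "order_id" not in args:
        raise ValueError("model omitted a required argument")
    return args

# Tool-call arguments arrive from the model as a JSON string:
print(parse_tool_call('{"order_id": "A-123"}'))
```

Pinning the schema down with `required` and `additionalProperties: False` is what makes structured outputs dependable enough to wire into downstream systems.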

• RAG & evaluation: Stand up OpenAI‑centric RAG (chunking, embeddings, indexing), implement groundedness checks, prompt test suites, red‑teaming, and cost/perf SLOs.
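As a flavour of the chunking step such a RAG pipeline starts from, a minimal fixed‑size chunker with overlap; the window and overlap sizes here are arbitrary assumptions, and production pipelines typically chunk on semantic boundaries instead:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding.

    Fixed-size chunking with overlap is a common baseline: the overlap
    keeps context that straddles a window boundary retrievable.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than the chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk would then be embedded (e.g. via the Embeddings endpoint) and indexed in the chosen vector store.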

• Fine‑tuning lifecycle: Own dataset curation, training/eval, bias checks, rollback/versioning, and telemetry for tuned models.

• Operability: Add observability (OpenAI Observability/OpenTelemetry), token/cost telemetry, retries/backoff, idempotency, and feature flags/canaries; document runbooks and SOPs.
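A minimal sketch of the retries/backoff‑plus‑idempotency pattern named above, under the assumption that the called service honours a per‑request idempotency key; the `TransientError` type and key‑passing convention are illustrative, not a specific API's contract:

```python
import random
import time
import uuid

class TransientError(Exception):
    """Placeholder for a retryable failure (e.g. rate limit, timeout)."""

def call_with_backoff(fn, max_retries: int = 4, base_delay: float = 0.5):
    """Retry a transiently failing call with exponential backoff and jitter.

    The idempotency key is generated once per logical request, so a server
    that honours such keys (an assumed contract) can deduplicate retries.
    """
    idempotency_key = str(uuid.uuid4())  # stable across all retries
    for attempt in range(max_retries + 1):
        try:
            return fn(idempotency_key=idempotency_key)
        except TransientError:
            if attempt == max_retries:
                raise
            # exponential backoff with full jitter, capped at 8 seconds
            delay = min(base_delay * 2 ** attempt, 8.0)
            time.sleep(random.uniform(0, delay))
```

Full jitter spreads retry storms out in time, and the stable key makes those retries safe rather than duplicating side effects.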

Cross‑platform & enterprise integration

• Azure/AWS/GCP identity & networking: Design with Managed Identity, secret management, and optional Private Link/private endpoints; harden per enterprise controls.

• Frameworks & vector stores: Apply OpenAI Agent SDK, LangChain/LangGraph or Semantic Kernel where useful; integrate Azure Cognitive Search, Redis/pgvector, or managed vector services.

• Copilot/Graph & app embeds: Where valuable, integrate via Copilot Studio and Microsoft Graph; wire assistants into Teams/SharePoint and line‑of‑business apps.

• Delivery engineering: Enforce CI/CD (GitHub Actions/Azure DevOps) and IaC (Terraform/Bicep); support multi‑region rollout strategies.

Minimum Qualifications

• 7+ years in software/solution architecture; strong Python plus Java or TypeScript.

• Proven delivery on OpenAI/Azure OpenAI (Assistants/tool calling, RAG, eval/safety, observability) and enterprise deployments (auth, policies, cost).

• Hands‑on CI/CD and IaC; excellent customer‑facing communication.

Preferred (mix‑and‑match)

• ChatGPT Enterprise administration (SSO/SCIM/RBAC), governance of custom GPTs, usage analytics.

• Fine‑tuning (dataset QA, training/eval pipelines, regression testing).

• RAG stacks (Azure Cognitive Search, Redis/pgvector), OpenAI Agent SDK, LangChain/LangGraph or Semantic Kernel.

• OpenAI Observability, OpenTelemetry.

• Professional-level cloud certifications.

Representative problems you’ll work on

Clinical advisors, shopping assistants, insurance document copilots, ServiceNow analytics, SAP copilots, and utilities/regulatory assistants delivered with agentic orchestration, guardrails, and observability across OpenAI‑first and multi‑vendor contexts.

About HCLTech AI & Cloud Native Labs

We are HCLTech’s global Centre of Excellence guiding advanced‑tech adoption for the world’s largest enterprises—combining strategic advice with accelerated engineering and open‑source leadership (CNCF).

Company
HCLTech
Location
London, UK
Posted