Permanent Data Engineering Jobs in West End of London

2 of 2 Permanent Data Engineering Jobs in West End of London

Senior Software Engineer | Python | Fully Remote

Central London / West End, London, United Kingdom
Hybrid / WFH Options
Wilson Brown
Senior Software Engineer - Backend & Data | Fully Remote (UK)
Founding Engineer | Python | GCP | Terraform | Event-Based Systems | Data Pipelines

An innovative, mission-driven LegalTech SaaS start-up is looking for a Senior Product Engineer (Backend & Data) with deep experience in Python, product development, and data engineering.

Role Information:
Salary: Up to £100,000 (DOE) + Equity
Location: … only)
Stack: Python, TypeScript, GCP, Pub/Sub, SQL & NoSQL, IaC (Terraform), CI/CD (GitHub Actions), observability tools, AI tooling

You'll join a remote-first, high-trust engineering team working with a modern, cloud-native stack, with real influence over technical decisions from day one. You'll take technical ownership of the backend to ensure it's robust, scalable, and ready for real customers, while adding new data-driven features, optimising performance, and shaping the long-term product roadmap. While your focus will be backend systems in Python, you'll also work across the stack, collaborate directly with users, and bring a strong product mindset to every decision. This role is ideal for someone who thrives …

Full Stack AI Software Engineer - Full Remote - UK

Central London / West End, London, United Kingdom
Hybrid / WFH Options
Ada Meher
… making process, with the founders having had first-hand experience of the lengthiness and challenges of deal processes in the past. Their stack spans back end, front end, data and, crucially, AI. This is a great opportunity for an entrepreneurial software engineer who wants to play a part in shaping the technical vision of this business and work on their product from an early stage.

What you'll work on:
Backend APIs (Python/FastAPI): Build and maintain secure, high-performance services that drive AI features and data access at scale.
RAG & vector search: Design and improve retrieval pipelines (embeddings, chunking, hybrid search, ranking, feedback loops), owning schema design, latency, and relevance across vector databases.
LLM integration: … Connect and orchestrate large language models (OpenAI, Bedrock, etc.), manage prompts, tools, safeguards, and evaluation.
Data pipelines: Ingest, clean, and transform structured and unstructured data; design efficient schemas (Postgres/NoSQL) for search and analytics.
Frontend (React/Next.js): Deliver user-friendly, performant UIs that make AI-powered features (search, filters, explanations, citations) clear and accessible.
Architecture: Shape …
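For context on the kind of work the second listing describes, here is a rough, self-contained sketch of a minimal FastAPI retrieval endpoint. Everything in it is illustrative: the routes, the in-memory store, and the toy character-frequency "embedding" are assumptions for demonstration only, not the company's actual API or pipeline.

```python
# Hypothetical sketch of a FastAPI retrieval endpoint of the kind described
# in the listing. Routes, names, and the toy embedding are illustrative only.
from math import sqrt
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Toy in-memory "vector store" of (text, embedding) pairs. A real system
# would use a vector database and a learned embedding model instead.
_DOCS: list[tuple[str, list[float]]] = []


def embed(text: str) -> list[float]:
    """Stand-in embedding: normalised character-frequency vector over a-z
    (purely for demonstration; replace with a real embedding model)."""
    counts = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            counts[ord(ch) - ord("a")] += 1.0
    norm = sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]


def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-normalised, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))


class Document(BaseModel):
    text: str


class Query(BaseModel):
    question: str
    top_k: int = 3


@app.post("/documents")
def add_document(doc: Document) -> dict:
    _DOCS.append((doc.text, embed(doc.text)))
    return {"count": len(_DOCS)}


@app.post("/search")
def search(query: Query) -> dict:
    """Rank stored documents by similarity to the query; in a RAG setup the
    top chunks would then be passed to an LLM as retrieval context."""
    q_vec = embed(query.question)
    ranked = sorted(_DOCS, key=lambda d: cosine(q_vec, d[1]), reverse=True)
    return {"results": [text for text, _ in ranked[: query.top_k]]}
```

Run with `uvicorn app:app` (assuming the file is saved as app.py), POST documents to /documents, then query /search; a production pipeline would swap the toy embedding and list for a real model and vector database.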