We're seeking a highly skilled and motivated Senior Data Engineer to join our growing data team. In this role, you'll architect and maintain robust, scalable data pipelines and infrastructure that power our analytics, machine learning, and business intelligence initiatives. You'll work with cutting-edge technologies like Python, PySpark, AWS EMR, and … Snowflake, and collaborate across teams to ensure data is clean, reliable, and actionable.

Responsibilities:
- Build and maintain scalable ETL pipelines using Python and PySpark to support data ingestion, transformation, and integration
- Develop and optimize distributed data workflows on AWS EMR for high-performance processing of large datasets
- Design, implement, and tune Snowflake data … datasets
- Ensure data integrity, consistency, and accuracy across multiple sources and systems
- Automate data workflows and processes to improve efficiency and reduce manual intervention
- Monitor pipeline performance, identify bottlenecks, and resolve issues proactively
- Apply best practices in CI/CD, version control (e.g., Git), and infrastructure-as-code (e.g., Terraform, CloudFormation)
- Enforce data …
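The ETL responsibilities above follow a common ingest, clean, transform pattern. A minimal pure-Python sketch of that pattern (the record fields and the skip-malformed-rows policy are hypothetical illustrations; in this role the same logic would typically run as PySpark DataFrame transformations on EMR):

```python
from datetime import datetime

def transform_records(rows):
    """Clean and normalize raw event rows: deduplicate,
    parse timestamps, and cast amounts to float.
    Rows with unparseable fields are skipped (one common ETL policy)."""
    seen = set()
    out = []
    for row in rows:
        key = (row.get("id"), row.get("ts"))
        if key in seen:
            continue  # deduplicate on (id, timestamp)
        seen.add(key)
        try:
            out.append({
                "id": int(row["id"]),
                "ts": datetime.strptime(row["ts"], "%Y-%m-%d"),
                "amount": float(row["amount"]),
            })
        except (KeyError, ValueError):
            continue  # skip malformed records rather than failing the batch
    return out

raw = [
    {"id": "1", "ts": "2024-01-01", "amount": "9.99"},
    {"id": "1", "ts": "2024-01-01", "amount": "9.99"},  # duplicate
    {"id": "2", "ts": "not-a-date", "amount": "5.00"},  # malformed timestamp
]
clean = transform_records(raw)
```

Here one duplicate and one malformed row are dropped, leaving a single clean record; the equivalent PySpark job would express the same steps as `dropDuplicates` plus typed column casts.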
Innovative AI start-up now requires a Python engineer to build innovative data solutions that power smarter, more relevant customer experiences. You will design and develop data pipelines, data models, backend APIs, and an array of customer-facing data services.

The role:
- Own and improve our client's data infrastructure … develop dashboards to deliver insights.
- Shape the direction of our client's architecture, scalability, and security.

Key Requirements:
- Strong hands-on Python experience with a solid understanding of modern data tools.
- Extensive experience working with both SQL and NoSQL databases.
- Exposure to cloud services, infrastructure as code, and CI/CD pipelines.
- Strong understanding of data modelling …
process, with the founders having had first-hand experience of the lengthiness and challenges of deal processes in the past. Their stack spans back end, front end, data and, crucially, AI. This is a great opportunity for an entrepreneurial software engineer who wants to play a part in shaping the technical vision of this business and work … their product from an early stage.

What you'll work on:
- Backend APIs (Python/FastAPI): Build and maintain secure, high-performance services that drive AI features and data access at scale.
- RAG & vector search: Design and improve retrieval pipelines (embeddings, chunking, hybrid search, ranking, feedback loops), owning schema design, latency, and relevance across vector databases.
- LLM integration … Connect and orchestrate large language models (OpenAI, Bedrock, etc.); manage prompts, tools, safeguards, and evaluation.
- Data pipelines: Ingest, clean, and transform structured and unstructured data; design efficient schemas (Postgres/NoSQL) for search and analytics.
- Frontend (React/Next.js): Deliver user-friendly, performant UIs that make AI-powered features (search, filters, explanations, citations) clear and accessible.
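The RAG and vector-search work described above boils down to a chunk, embed, rank loop. A toy self-contained sketch of that loop (the bag-of-words "embedding" is a deliberate stand-in for a real model such as an OpenAI or Bedrock embedding endpoint, and the similarity search stands in for a vector database):

```python
import math
from collections import Counter

def chunk(text, size=50, overlap=10):
    """Split text into overlapping character chunks, a common
    pre-embedding step in retrieval pipelines."""
    step = size - overlap
    return [text[start:start + size]
            for start in range(0, max(len(text) - overlap, 1), step)]

def embed(text):
    """Toy bag-of-words 'embedding'; a real pipeline would call an
    embedding model and get back a dense vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Rank chunks by similarity to the query embedding: the core
    operation a vector database performs at scale."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

doc = ("Deal terms include a break fee. Closing conditions cover "
       "regulatory approval and financing.")
top = retrieve("regulatory approval conditions", chunk(doc))
```

The top-ranked chunk is the one mentioning regulatory approval; the production concerns the posting names (hybrid search, ranking, feedback loops, latency) all layer on top of this same retrieve step.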