Data Architect
2 days per month in the Surrey HQ (hybrid with strong flexibility)
This organisation is partway through a large-scale transformation that places modern data strategy at the heart of its evolution. With a growing portfolio of advanced data products and AI initiatives, they are investing in scalable, cloud-native architecture that enables powerful analytics and intelligent services across the entire business.
This is an opportunity to play a central role in shaping their long-term data ecosystem, with hands-on ownership of model design, platform optimisation, and data governance standards within a well-established data science, analytics engineering, AI engineering, and data platform engineering team.
What You'll Be Doing
- Design scalable data platforms using modern architecture principles across development, staging, and production environments
- Lead the modelling of business-critical domains into dimensional structures that support BI, regulatory reporting, and ML pipelines
- Integrate data from diverse internal and external sources, including cloud services, APIs, and third-party systems
- Define and maintain semantic layers using tools such as dbt and Delta Live Tables, ensuring consistency across dashboards and analytics tools
- Enable data products that support AI and GenAI use cases, including vector-ready and model-optimised datasets
- Develop and maintain secure data access controls, including RBAC, token policies, and anonymisation mechanisms
- Support batch and real-time data flows using tools like Airflow, Kafka, Spark, and Terraform
- Monitor cloud platform performance and implement cost-control measures while improving reliability
- Collaborate with product, engineering, and governance teams to define standards, lead architectural discussions, and contribute to strategic decisions
- Write and review architectural documentation and technical design guidance
What They're Looking For
- Strong experience in data architecture, including designing data models from scratch and implementing schemas like star, snowflake, and canonical enterprise structures
- A track record of working across domains such as pricing, claims, fraud, policy, and quote data
- High proficiency in SQL and cloud-native tools such as Databricks, Snowflake, and AWS environments
- Hands-on experience delivering semantic layers using dbt and building analytics-friendly data structures
- Experience working with streaming and batch pipelines and exposure to the tools that support them
- Confidence designing infrastructure to support modern data science workflows, including GenAI, RAG pipelines, and ML inference
- Good understanding of data privacy, retention, and access control frameworks in regulated environments
- A collaborative mindset, with experience working across business and technical teams to deliver scalable, reusable data components
If this role interests you and you would like to learn more (or hear about other roles), please apply here or contact us via niall.wharton@Xcede.com (feel free to include a CV for review).