Data Engineer
🚀 Data Engineer | Azure | Databricks | Lakehouse
We’re looking for a Data Engineer to join a high-performing data function, playing a key role in building and scaling a modern Azure-based data platform.
This is an opportunity to work on a Databricks lakehouse environment, delivering robust, scalable pipelines that power critical analytics and business insights.
💡 The Role
You’ll be responsible for designing, building, and maintaining end-to-end data pipelines, taking data from source systems through to curated datasets ready for reporting and analytics.
Working closely with architecture and delivery teams, you’ll help shape a high-quality, governed, and observable data platform.
🔧 What You’ll Be Doing
- Build and maintain data pipelines on Azure and Databricks
- Develop scalable ELT processes using PySpark
- Implement data quality, lineage, and security controls
- Own CI/CD pipelines and Infrastructure as Code (Terraform)
- Ensure pipelines are testable, observable, and easy to troubleshoot
- Optimise data services for performance and cost efficiency
- Collaborate with stakeholders to translate requirements into data solutions
- Contribute to Agile delivery with clear documentation and best practices
🧠 What We’re Looking For
- Strong experience with Azure Data Platform & Databricks
- Proven track record building scalable data pipelines
- Hands-on with PySpark / Spark-based processing
- Experience with Terraform & CI/CD pipelines
- Solid understanding of data governance, quality, and lineage
- Familiarity with Git-based workflows and DevOps practices
- Strong communication skills and ability to work with cross-functional teams
🌟 Why Apply?
- Work on a modern lakehouse architecture
- Be part of a forward-thinking data team
- Help shape engineering standards and platform design
- Deliver impactful data solutions at scale