*Please note that this role requires security clearance due to the nature of the project.
Essential Skills & Experience
10+ years' experience in Data Engineering, with a minimum of 3 years of hands-on Azure Databricks experience delivering production-grade solutions.
Strong programming proficiency in Python and Spark (PySpark) or Scala, with the ability to build scalable and efficient data processing applications.
Advanced understanding of data warehousing concepts, including dimensional modelling, ETL/ELT patterns, and modern data integration architectures.
Extensive experience working with Azure data services, particularly Azure Data Factory, Azure Blob Storage, Azure SQL Database, and related components within the Azure ecosystem.
Demonstrable experience designing, developing, and maintaining large-scale datasets and complex data pipelines in cloud environments.
Proven capability in data architecture design, including the development and optimisation of end-to-end data pipelines for performance, reliability, and scalability.
Expert-level knowledge of Databricks, including hands-on implementation, cluster management, performance tuning, and (ideally) relevant Databricks certifications.
Hands-on experience with SQL and NoSQL database technologies, with strong query optimisation skills.
Solid understanding of data quality frameworks, data governance practices, and implementing automated testing/validation within pipelines.
Proficient with version control systems such as Git, including branching strategies and CI/CD integration.
Experience working within Agile delivery environments, collaborating closely with cross-functional teams to deliver iterative, high-quality solutions.