Glasgow, Lanarkshire, Scotland, United Kingdom (Hybrid / WFH Options)
Harvey Nash
… database platforms
Cloud Architecture: Experience with Azure data services (Azure Data Factory, Azure Synapse, Azure Data Lake)
Data Governance: Understanding of data quality, data lineage, and metadata management principles
ETL/ELT Processes: Experience designing and implementing data integration workflows
Business Intelligence: Knowledge of reporting and analytics platforms (Power BI, SSRS, or similar)
Data Warehousing: Experience with dimensional modelling and …
Bonus if you have –
Proficiency in SQL Server, Azure SQL Database and other enterprise database platforms
Experience with Azure data services (Azure Data Factory, Azure Synapse, Azure Data Lake)
ETL/ELT: experience designing and implementing data integration workflows
Knowledge of reporting and analytics platforms (Power BI, SSRS, or similar)
Experience with dimensional modelling and data warehouse architecture patterns
API …
… controls, and compliance in data management.
Technical Skills: Hands-on experience with Python, PySpark, and SQL. Experience with AWS (preferred). Knowledge of data warehousing (DW) concepts and ETL processes. Familiarity with DevOps principles and secure coding practices.
Experience: Proven track record in data engineering, data governance, and large-scale data systems. Experience working on change and transformation projects. …
… paced environment, tackling a variety of projects while taking a consultative approach to exciting new challenges. Key skills required:
Strong experience in API Integration & Automation
Proficiency in Data Engineering & ETL
Understanding of Security Tooling
Familiarity/Expertise in Power BI
Experience with Cloud & Storage (Azure preferred)
Good understanding of Data Governance & Security
Experience with Infrastructure as Code (IaC)
The role …
… management solutions, including monitoring, log-collection, correlation, and analytics dashboards. Implement application performance monitoring across large-scale distributed systems. Collaborate with cross-functional teams to design scalable and reliable ETL processes using Python and Databricks. Develop and deploy ETL jobs, extracting and transforming data from multiple sources. Take ownership of the full engineering lifecycle: data extraction, cleansing, transformation, and loading. …
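For illustration only, a minimal sketch of the kind of ETL job this listing describes (extract, cleanse, transform, load), written in plain PySpark so it runs locally or on a Databricks cluster. It is not taken from any employer's pipeline: the "orders" source, the column names, the paths, and the CSV-to-Parquet layout are all assumptions, and on Databricks the final write could target a Delta table instead.

# A minimal ETL sketch in plain PySpark (runs locally or on a Databricks cluster).
# Everything here is a hypothetical placeholder: the "orders" source, the column
# names, and the input/output paths are assumptions for illustration only.
from pyspark.sql import SparkSession, functions as F

def run_etl(source_path: str, target_path: str) -> None:
    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Extract: read raw CSV files delivered by an upstream source system.
    raw = (
        spark.read
        .option("header", True)
        .option("inferSchema", True)
        .csv(source_path)
    )

    # Cleanse: drop records missing the business key and remove duplicates on it.
    cleaned = raw.dropna(subset=["order_id"]).dropDuplicates(["order_id"])

    # Transform: normalise types and stamp each row with a load timestamp.
    transformed = (
        cleaned
        .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
        .withColumn("amount", F.col("amount").cast("double"))
        .withColumn("loaded_at", F.current_timestamp())
    )

    # Load: write partitioned Parquet (on Databricks this could be a Delta table).
    transformed.write.mode("overwrite").partitionBy("order_date").parquet(target_path)

if __name__ == "__main__":
    run_etl("/data/raw/orders/", "/data/curated/orders/")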