… problems for their immediate team and across multiple teams. Ensuring data privacy and security in all data-driven interfaces.

Set yourself apart:
- Good knowledge of SQL/Python/PySpark development.
- Proven understanding of Azure cloud principles and Azure DevOps.
- Understanding of SQL optimisation best practices.
- Understanding of data quality principles and best practices.
3 Arena Central, Bridge Street, Birmingham, England
Homes England
… organisation. To deliver this you will have opportunities to learn and use tools and technologies such as Azure Databricks, Computer Vision and Azure Machine Learning, alongside languages such as PySpark, Python, R and SQL. As a Data Scientist you will also have opportunities to work alongside the engineering team developing our Azure Data Platform, and Data Analysts working to …
Birmingham, West Midlands, West Midlands (County), United Kingdom Hybrid / WFH Options
TXP
… of the project.

Role Responsibilities - you will:
- Focus on the Silver layer.
- Build ETL pipelines using Microsoft Fabric Data Factory.
- Perform data transformations, schema design, standardisation, and cleansing.
- Write PySpark notebooks and stored procedures with error tracking.
- Possibly assist with data dictionary and mapping logic in collaboration with the Data Warehouse Manager.
- Validate data accuracy by comparing outputs to … ERP source data.

Skills and experience required:
- Microsoft Fabric (preferred), or Azure Synapse/Data Factory experience.
- PySpark, SQL, PL/SQL, stored procedures, error handling.
- Experience with ERP systems (e.g., JDE, SAP, Dynamics) is a nice-to-have.
- Familiarity with Oracle or DB2 is beneficial.
- Fabric certification is helpful but not essential; practical experience is more valued.
- Be …
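To give a flavour of the Silver-layer work described above, here is a minimal, hedged sketch of the kinds of standardisation and cleansing rules such a notebook applies. The column names and rules are entirely hypothetical, and plain Python is used for illustration; in a real Fabric pipeline these would be expressed as PySpark DataFrame transformations.

```python
# Illustrative sketch only: typical Bronze-to-Silver cleansing rules.
# Field names (customer_id, region, amount, order_date) are hypothetical.
from datetime import date

def clean_record(raw: dict) -> dict:
    """Standardise one Bronze-layer record for the Silver layer."""
    amount_text = str(raw.get("amount", ""))
    return {
        # Trim whitespace and normalise casing on identifier fields
        "customer_id": raw["customer_id"].strip().upper(),
        # Map empty strings to None instead of carrying them forward
        "region": (raw.get("region") or "").strip().title() or None,
        # Cast numeric text to float; malformed values become None
        "amount": float(amount_text) if amount_text.replace(".", "", 1).isdigit() else None,
        # Parse ISO dates; leave None when the source field is missing
        "order_date": date.fromisoformat(raw["order_date"]) if raw.get("order_date") else None,
    }

records = [
    {"customer_id": " c001 ", "region": "west midlands", "amount": "19.99", "order_date": "2024-05-01"},
    {"customer_id": "C002", "region": "", "amount": "n/a", "order_date": None},
]
silver = [clean_record(r) for r in records]
```

The same logic in PySpark would use built-ins such as `trim`, `upper`, and `cast`, with the validation step comparing row counts and key totals back to the ERP source.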