Data Engineer
Role: Data Engineer (Python, PySpark, SQL)
Day rate: £475-£520 per day (Inside IR35)
Contract: 6 months initial
We are currently recruiting a Data Engineer to join a team on a Business Data Service Project, a Data Warehouse replacement and report simplification programme. You will be responsible for ensuring all data products and solutions created in the business insights ecosystem are fit for purpose, resilient, robust and reliable.
You will play a pivotal role in building, testing and deploying Data Warehouse solutions, covering the full lifecycle of planning, ingestion, transformation, consolidation and aggregation of data from source to target in the Data Warehouse environment.
Skills and experience required:
- Strong experience developing ETL/ELT pipelines using PySpark and Python
- Hands-on experience with Microsoft Fabric lakehouse or similar cloud data platforms (Azure Synapse Analytics, Databricks)
- Proficiency in working with Jupyter/Fabric Notebooks for data engineering workflows
- Solid understanding of data lakehouse architecture patterns and medallion architecture
- Experience working with Delta Lake or similar lakehouse storage formats
- Strong SQL skills for data manipulation, transformation, and quality validation
This role requires 2-3 days per month onsite in Dudley, West Midlands. Please consider this when applying.
If you are interested in this role and would like to apply, please click the link for immediate consideration.