Data Engineer
Are you a Data Engineer wanting to join a business early on in their new data platform launch?
You’ll have the genuine ability to make a real impact at this Housing Association, with a clear route to lead and architect roles. If you’re a Data Engineer with a good technical grounding who now wants to grow and learn, this would be an excellent opportunity for you.
This is a hands-on data engineering role where you’ll be building and shaping how data works across the organisation.
As Data Engineer, you’ll spend your time designing and developing robust pipelines across Azure (ADF, Synapse, Fabric), pulling data from multiple systems and turning it into something joined-up, usable, and genuinely valuable. Alongside that, you’ll be working in Databricks (PySpark / SQL) to curate and refine data into well-structured datasets that the business can rely on.
You’ll be responsible for making sure the data is right by building in validation, reconciliation, and quality checks as standard, so what’s delivered is trusted from day one. You’ll also take real ownership of how data flows across the platform, thinking about performance, reliability, and cost as part of everything you build.
The business is very proud of its engineering culture and mindset. You’ll be working with Git, code reviews, and CI/CD (Azure DevOps / GitHub) to deliver changes across environments, keeping things clean, controlled, and scalable as the platform grows.
This role sits right at the centre of how data is used. You’ll collaborate with BI analysts, engineers, and wider business teams to make sure data is accessible and actually drives decisions, not just reports, so you won’t be working in isolation.
Ultimately, this is a role for someone who wants to move beyond just building pipelines and start shaping the future state for data in a modern, cloud-based environment.
You would be an excellent fit as Data Engineer if you have:
- Designed, built, and managed ETL/ELT pipelines using Azure cloud services such as Azure Data Factory, Azure Synapse Analytics, and/or Fabric pipelines.
- Designed and orchestrated data curation pipelines/jobs with Databricks (PySpark/SQL).
- Worked with Git, including code reviews and branch protection, and used CI/CD (Azure DevOps or GitHub Actions).
This is a hybrid role with one day a week in the office in Milton Keynes with flexibility.
The salary is £50,000 with some flexibility depending on experience, alongside a good benefits package.
Please apply to this advert, reach out to me on LinkedIn, or contact me at james@recruitwithpurpose.co.uk to learn more.
If you don’t have an updated CV, no problem, send what you have, and we’ll take it from there.