Fabric and Databricks Data Engineer - Outside IR35 - Hybrid
Role Overview
We're looking for a skilled Fabric and Databricks Data Engineer to design, build, and maintain scalable analytics and data engineering solutions. You'll work at the core of our data platform, using Microsoft Fabric and Databricks to enable analytics, reporting, and advanced data use cases.
You'll collaborate closely with data analysts, data scientists, and stakeholders to deliver reliable, performant, and secure data pipelines and models.
Key Responsibilities
Design, develop, and maintain end-to-end data pipelines using Microsoft Fabric and Databricks
Build and optimize Lakehouse architectures using Delta Lake principles
Ingest, transform, and curate data from multiple sources (APIs, databases, files, streaming)
Develop scalable data transformations using PySpark and Spark SQL
Implement data models optimized for analytics and reporting (e.g. star schemas)
Monitor, troubleshoot, and optimize performance and cost of data workloads
Apply data quality, validation, and governance best practices
Collaborate with analysts and BI teams to enable self-service analytics
Contribute to CI/CD pipelines and infrastructure-as-code for data platforms
Ensure security, access controls, and compliance across the data estate
Document solutions and promote engineering best practices
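To give a flavour of the data quality and validation work above, here is a minimal sketch of row-level checks on ingested records. The field names (customer_id, amount, order_date) are purely illustrative assumptions; in a production Fabric/Databricks pipeline these rules would typically run as PySpark transformations or Delta Lake constraints rather than plain Python.

```python
# Illustrative row-level data-quality checks on a hypothetical schema.
# In production these would be PySpark/Delta Lake expectations.
from datetime import date

def validate_row(row: dict) -> list:
    """Return a list of rule violations for one ingested record."""
    errors = []
    if not row.get("customer_id"):
        errors.append("missing customer_id")
    if row.get("amount", 0) < 0:
        errors.append("negative amount")
    if not isinstance(row.get("order_date"), date):
        errors.append("order_date is not a date")
    return errors

def split_valid_invalid(rows):
    """Partition records into curated vs. quarantined sets."""
    valid, invalid = [], []
    for row in rows:
        errs = validate_row(row)
        (invalid if errs else valid).append((row, errs))
    return valid, invalid
```

Records that fail any rule are quarantined with their violation list attached, so they can be inspected and reprocessed rather than silently dropped.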
Required Skills & Experience
Strong experience with Microsoft Fabric (Lakehouse, Pipelines, Notebooks, Dataflows, OneLake)
Hands-on experience with Databricks in production environments
Proficiency in PySpark and SQL
Solid understanding of data engineering concepts (ETL/ELT, orchestration, partitioning)
Experience working with Delta Lake
Familiarity with cloud platforms (Azure preferred)
Experience integrating data from relational and non-relational sources
Knowledge of data modeling for analytics
Experience with version control (Git) and collaborative development workflows
Nice to Have
Experience with Power BI and semantic models
Exposure to streaming technologies (Kafka, Event Hubs, Spark Structured Streaming)
Infrastructure-as-code experience (Bicep, Terraform)
CI/CD tooling (Azure DevOps, GitHub Actions)
Familiarity with data governance and cataloging tools
Experience supporting ML or advanced analytics workloads
What We're Looking For
Strong problem-solving and analytical mindset
Ability to work independently and as part of a cross-functional team
Clear communication skills and stakeholder awareness
Passion for building reliable, scalable data platforms
To apply for this role, please submit your CV or contact Dillon Blackburn on (phone number removed) or at (url removed).
Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We're the proud sponsor and supporter of SQLBits, the Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.