Central London, London, United Kingdom · Hybrid/Remote Options
McCabe & Barton
…/ELT processes to transform raw data into structured, analytics-ready formats. Optimise pipeline performance and ensure high availability of data services.

Infrastructure & Architecture
- Architect and deploy scalable data lake solutions using Azure Data Lake Storage.
- Implement governance and security measures across the platform.
- Leverage Terraform or similar IaC tools for controlled and reproducible deployments.

Databricks Development
- … Develop and optimise data jobs using PySpark or Scala within Databricks.
- Implement the medallion architecture (bronze, silver, gold layers) and use Delta Lake for reliable data transactions.
- Manage cluster configurations and CI/CD pipelines for Databricks deployments.

Monitoring & Operations
- Implement monitoring solutions using Azure Monitor, Log Analytics, and Databricks tools.
- Optimise performance, ensure SLAs are met, and … for knowledge sharing.

Essential Skills & Experience
- 5+ years of experience with Azure services (Azure Data Factory, ADLS, Azure SQL Database, Synapse Analytics).
- Strong hands-on expertise in Databricks, Delta Lake, and cluster management.
- Proficiency in SQL and Python for pipeline development.
- Familiarity with Git/GitHub and CI/CD practices.
- Understanding of data modelling, data …
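For candidates unfamiliar with the medallion architecture named in the listing above, the idea can be sketched in plain Python: raw (bronze) records are cleaned into a silver layer, then aggregated into an analytics-ready gold layer. This is a conceptual sketch only, not the Databricks or Spark API; all function and field names are illustrative.

```python
# Conceptual sketch of the medallion architecture (bronze -> silver -> gold),
# using plain Python lists of dicts in place of Spark DataFrames.
# All names here are illustrative, not Databricks APIs.

def to_silver(bronze_rows):
    """Clean the raw (bronze) layer: drop malformed rows, normalise types."""
    silver = []
    for row in bronze_rows:
        if row.get("amount") is None:
            continue  # discard incomplete records
        silver.append({
            "region": row["region"].strip().upper(),  # normalise the key
            "amount": float(row["amount"]),           # enforce a numeric type
        })
    return silver

def to_gold(silver_rows):
    """Aggregate the cleaned (silver) layer into analytics-ready (gold) totals."""
    totals = {}
    for row in silver_rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

bronze = [
    {"region": " uk ", "amount": "10.5"},
    {"region": "US", "amount": None},   # malformed: dropped in silver
    {"region": "UK", "amount": "4.5"},
]
gold = to_gold(to_silver(bronze))
print(gold)  # {'UK': 15.0}
```

In a real Databricks implementation each layer would be a Delta Lake table and the transforms would be PySpark jobs, but the layering discipline is the same: each layer only reads from the one before it.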
… governance across modern cloud environments.

Key Responsibilities
- Design, build, and maintain scalable data pipelines using Databricks Notebooks, Jobs, and Workflows for both batch and streaming data.
- Optimise Spark and Delta Lake performance through efficient cluster configuration, adaptive query execution, and caching strategies.
- Conduct performance testing and cluster tuning to ensure cost-efficient, high-performing workloads.
- Implement data quality … control policies aligned with Databricks Unity Catalog and governance best practices.
- Develop PySpark applications for ETL, data transformation, and analytics, following modular and reusable design principles.
- Create and manage Delta Lake tables with ACID compliance, schema evolution, and time travel for versioned data management.
- Integrate Databricks solutions with Azure services such as Azure Data Lake Storage, Key Vault, and Azure Functions.

What We're Looking For
- Proven experience with Databricks, PySpark, and Delta Lake.
- Strong understanding of workflow orchestration, performance optimisation, and data governance.
- Hands-on experience with Azure cloud services.
- Ability to work in a fast-paced environment and deliver high-quality solutions.
- SC Cleared candidates.

If you're interested in this role, click 'apply' …
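The "time travel for versioned data management" mentioned in the responsibilities above refers to Delta Lake's ability to read earlier versions of a table, because every write commits a new immutable version. A minimal sketch of that semantics, in plain Python rather than the Delta Lake API (class and method names are illustrative):

```python
# Conceptual sketch of Delta Lake-style "time travel": every write commits a
# new immutable version, and older versions remain readable by number.
# Plain Python, not the Delta Lake API; all names are illustrative.

class VersionedTable:
    def __init__(self):
        self._versions = [[]]  # version 0: the empty table

    def write(self, rows):
        """Append rows atomically as a new version (all-or-nothing commit)."""
        new = list(self._versions[-1]) + list(rows)
        self._versions.append(new)      # commit only after the copy succeeds
        return len(self._versions) - 1  # the new version number

    def read(self, version=None):
        """Read the latest version, or an older one ('time travel')."""
        idx = -1 if version is None else version
        return list(self._versions[idx])

t = VersionedTable()
v1 = t.write([{"id": 1}])
v2 = t.write([{"id": 2}])
print(t.read())    # latest: [{'id': 1}, {'id': 2}]
print(t.read(v1))  # time travel back: [{'id': 1}]
```

Real Delta Lake stores these versions as a transaction log over Parquet files (queried with `VERSION AS OF` or `TIMESTAMP AS OF`), which is also what makes ACID guarantees and schema evolution auditable.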
If you love solving complex data challenges and building scalable solutions, this is your chance to make an impact.

What You'll Work With
- Azure Data Services: Data Factory, Data Lake, SQL
- Databricks: Spark, Delta Lake
- Power BI: Advanced dashboards
- ETL & Data Modelling: T-SQL, metadata-driven pipelines
- DevOps: CI/CD
- Bonus: Python

What you'll do …
City of London, London, United Kingdom · Hybrid/Remote Options
Tenth Revolution Group
… real impact. You'll work with cutting-edge technology and stay at the forefront of the data engineering field.

You'll Work With
- Azure Data Services: Data Factory, Data Lake, SQL
- Databricks: Spark, Delta Lake
- Power BI: Advanced dashboards and analytics
- ETL & Data Modelling: T-SQL, metadata-driven pipelines

Design and implement scalable Azure-based data solutions …
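Both listings above ask for "metadata-driven pipelines": pipelines whose steps are described as configuration (typically rows in a control table) rather than hard-coded logic. A minimal Python sketch of the pattern, under the assumption that each metadata entry names a registered step and its arguments (all names are illustrative, not any vendor's API):

```python
# Minimal sketch of a metadata-driven pipeline: the transformations to run are
# listed as data (metadata), not hard-coded. All names are illustrative.

def drop_nulls(rows, column):
    """Remove rows where the given column is missing or null."""
    return [r for r in rows if r.get(column) is not None]

def rename(rows, old, new):
    """Rename a column in every row."""
    return [{(new if k == old else k): v for k, v in r.items()} for r in rows]

# Registry of available steps; metadata refers to them by name.
STEPS = {"drop_nulls": drop_nulls, "rename": rename}

# In practice this would be loaded from a control table or config file.
pipeline_metadata = [
    {"step": "drop_nulls", "args": {"column": "id"}},
    {"step": "rename", "args": {"old": "id", "new": "customer_id"}},
]

def run_pipeline(rows, metadata):
    """Apply each configured step in order."""
    for entry in metadata:
        rows = STEPS[entry["step"]](rows, **entry["args"])
    return rows

data = [{"id": 1}, {"id": None}, {"id": 3}]
print(run_pipeline(data, pipeline_metadata))
# [{'customer_id': 1}, {'customer_id': 3}]
```

The appeal of the pattern is that adding a new source or transformation means adding metadata rows, not redeploying pipeline code; in Azure Data Factory or Databricks the registry entries would be parameterised activities or notebook tasks instead of Python functions.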