London, England, United Kingdom Hybrid / WFH Options
Axis Capital
and Lakehouse architectures to ensure our data solutions are secure, efficient, and optimized. Key Responsibilities: Design and implement data solutions using Azure services, including Azure Databricks, ADF, and Data Lake Storage. Develop and maintain ETL/ELT pipelines to process structured and unstructured data from multiple sources. Automate loads using Databricks Workflows and Jobs. Develop, test, and build CI … modeling, warehousing, and real-time streaming. Knowledge of developing and processing full and incremental loads. Experience automating loads using Databricks Workflows and Jobs. Expertise in Azure Databricks, including Delta Lake, Spark optimizations, and MLflow. Strong experience with Azure Data Factory (ADF) for data integration and orchestration. Hands-on experience with Azure DevOps, including pipelines, repos, and infrastructure …
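For illustration only (not part of the posting), here is a minimal PySpark sketch of the full/incremental load pattern this role describes, using a Delta Lake MERGE for the incremental path. It assumes a Databricks notebook where `spark` is predefined; the storage path, table, and key column are hypothetical.

```python
from delta.tables import DeltaTable

# Incremental source: today's changed records landed by an upstream ADF copy
# activity (the path is a hypothetical example).
updates = (
    spark.read.format("parquet")
    .load("abfss://raw@examplestore.dfs.core.windows.net/customers/incremental/")
)

# Target Delta table, assumed to have been created by an initial full load.
target = DeltaTable.forName(spark, "silver.customers")

# Upsert: update matched rows, insert new ones. Delta Lake's ACID guarantees
# make this safe to re-run if a Databricks Job retries the task.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

Scheduled as a Databricks Workflow, the same notebook can cover both a periodic full refresh (overwrite) and the intraday incremental merge.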
City of London, London, United Kingdom Hybrid / WFH Options
Osmii
a phased roadmap for the Databricks Lakehouse Platform. Architectural Design: Lead the end-to-end design of the Databricks Lakehouse architecture (Medallion architecture), including data ingestion patterns, storage layers (Delta Lake), processing frameworks (Spark), and consumption mechanisms. Technology Selection: Evaluate and recommend optimal Databricks features and integrations (e.g., Unity Catalog, Photon, Delta Live Tables, MLflow) and complementary … cloud services (e.g., Azure Data Factory, Azure Data Lake Storage, Power BI). Security & Governance Frameworks: Design robust data governance, security, and access control models within the Databricks ecosystem, ensuring compliance with industry standards and regulations. Phase 2: Core Platform Build & Development. Hands-on Implementation: Act as a lead engineer in the initial build-out of core data pipelines … transfer sessions to ensure long-term sustainability of the platform. Required Skills & Experience Proven Databricks Expertise: Deep, hands-on experience designing and implementing solutions on the Databricks Lakehouse Platform (Delta Lake, Unity Catalog, Spark, Databricks SQL Analytics). Cloud Data Architecture: Extensive experience with Azure data services (e.g., Azure Data Factory, Azure Data Lake Storage, Azure Synapse …
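As a hedged sketch of the Medallion (bronze/silver/gold) layering this architecture role centres on: the table names, paths, and columns are assumptions, and `spark` is taken as the Databricks-provided session.

```python
from pyspark.sql import functions as F

# Bronze: land raw files as-is, preserving source fidelity.
raw = spark.read.json("abfss://landing@examplestore.dfs.core.windows.net/orders/")
raw.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver: cleanse and conform types; drop duplicates and null keys.
silver = (
    spark.table("bronze.orders")
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .dropDuplicates(["order_id"])
    .where(F.col("order_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Gold: business-level aggregate ready for consumption (e.g., Power BI).
gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("lifetime_value"))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.customer_value")
```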
Azure and cloud computing concepts Apache Spark – Databricks, Microsoft Fabric, or other Spark engines Python SQL – complex high-performance queries Azure Data Factory or other orchestration tools Azure Data Lake Storage and Delta Lake Unity Catalog, Purview, or other Data Governance tools IaC tools like Terraform, Azure DevOps Preferred but not essential: Visualization Tools – Power BI, Tableau …
of our solutions in Python SQL – writing complex high-performance queries, understanding how queries execute on different compute technologies Azure Data Factory or other cloud orchestration tools Azure Data Lake Storage and Delta Lake Unity Catalog, Purview or other Data Governance tools IaC tools such as Terraform, Azure DevOps Other technology we like, but isn’t essential …
data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Need to program using the languages …
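To make the cleaning/enriching step concrete, a small illustrative sketch (not from the posting) using Spark SQL against Unity Catalog's three-level namespace; the catalog, schema, and column names are hypothetical.

```python
# Clean: standardise text, cast types, and filter out rows without a key.
cleaned = spark.sql("""
    SELECT
        policy_id,
        TRIM(UPPER(region))            AS region,
        CAST(premium AS DECIMAL(12,2)) AS premium,
        TO_DATE(start_date)            AS start_date
    FROM main.bronze.policies
    WHERE policy_id IS NOT NULL
""")

# Enrich: join a governed reference table, then persist the curated result
# back to Unity Catalog so access control and lineage apply automatically.
enriched = cleaned.join(spark.table("main.reference.regions"), "region", "left")
enriched.write.format("delta").mode("overwrite").saveAsTable("main.silver.policies")
```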
building and deploying modern Azure Databricks-based data solutions, enabling the business to make faster, data-driven decisions. You’ll work hands-on with Azure Databricks, Azure Data Factory, Delta Lake, and Power BI to design scalable data pipelines, implement efficient data models, and ensure high-quality data delivery. This is a fantastic opportunity to shape the future … Maritime while working with cutting-edge cloud technologies. Key responsibilities and primary deliverables: Design, develop, and optimize end-to-end data pipelines (batch & streaming) using Azure Databricks, Spark, and Delta Lake. Implement Medallion Architecture to structure raw, enriched, and curated data layers efficiently. Build scalable ETL/ELT processes with Azure Data Factory and PySpark. Work with Data Architecture … pipelines. Collaborate with analysts to validate and refine datasets for reporting. Apply DevOps & CI/CD best practices (Git, Azure DevOps) for automated testing and deployment. Optimize Spark jobs, Delta Lake tables, and SQL queries for performance and cost efficiency. Troubleshoot and resolve data pipeline issues proactively. Partner with Data Architects, Analysts, and Business Teams to deliver end …
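For the batch & streaming side, a minimal Structured Streaming sketch, assuming Databricks Auto Loader is available and `spark` is in scope; the landing path, checkpoint locations, and table name are invented for illustration.

```python
# Ingest new files incrementally as they arrive in the landing zone.
stream = (
    spark.readStream.format("cloudFiles")              # Databricks Auto Loader
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/positions/_schema")
    .load("abfss://landing@examplestore.dfs.core.windows.net/vessel-positions/")
)

# Write to a bronze Delta table. availableNow processes the backlog and stops,
# so the same code serves both scheduled batch runs and continuous streaming.
(
    stream.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/positions")
    .trigger(availableNow=True)
    .toTable("bronze.vessel_positions")
)
```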
Newcastle upon Tyne, England, United Kingdom Hybrid / WFH Options
Somerset Bridge
data infrastructure, ensuring compliance with FCA/PRA regulations, and enabling AI-driven analytics and automation. By leveraging Azure-native services, such as Azure Data Factory (ADF) for orchestration, Delta Lake for ACID-compliant data storage, and Databricks Structured Streaming for real-time data processing, the Data Engineer will help unlock insights, enhance pricing accuracy, and drive innovation. … governance, and automation within Azure’s modern data platform. Key Responsibilities: Data Pipeline Development – Design, build, and maintain scalable ELT pipelines using Azure Databricks, Azure Data Factory (ADF), and Delta Lake to automate real-time and batch data ingestion. Cloud Data Engineering – Develop and optimise data solutions within Azure, ensuring efficiency, cost-effectiveness, and scalability, leveraging Azure Synapse … Analytics, ADLS Gen2, and Databricks Workflows. Data Modelling & Architecture – Implement robust data models to support analytics, reporting, and machine learning, using Delta Lake and Azure Synapse. Automation & Observability – Use Databricks Workflows, dbt, and Azure Monitor to manage transformations, monitor query execution, and implement data reliability checks. Data Quality & Governance – Ensure data integrity, accuracy, and compliance with industry regulations …
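As an illustration of the "data reliability checks" and cost/performance levers this posting mentions, a short sketch using two Delta Lake features available in Databricks; the table and column names are hypothetical.

```python
# Enforce integrity at write time: Delta rejects any batch containing rows
# that violate the CHECK constraint, surfacing bad data early.
spark.sql("""
    ALTER TABLE silver.claims
    ADD CONSTRAINT valid_amount CHECK (claim_amount >= 0)
""")

# Compact small files and co-locate a frequently filtered column so queries
# scan less data, keeping both latency and cost predictable.
spark.sql("OPTIMIZE silver.claims ZORDER BY (policy_id)")
```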
Greater London, England, United Kingdom Hybrid / WFH Options
Aventum Group
presentation skills with the ability to influence others Skills and Abilities Platforms & Tools: Cloud Computing platforms (ADLS Gen2), Microsoft Stack (Synapse, Databricks, Fabric, Profisee), Azure Service Bus, Power BI, Delta Lake, Azure DevOps, Azure Monitor, Azure Data Factory, SQL Server, Azure Data Lake Storage, Azure App Service; Azure ML is a plus. Languages: Python, SQL, T-SQL, SSIS. DB: Azure SQL Database …
fast-growing organisation. Key Responsibilities: Design, develop, and maintain scalable data pipelines using SQL and Python (PySpark). Ingest, transform, and curate data from multiple sources into Azure Data Lake and Delta Lake formats. Build and optimize datasets for performance and reliability in Azure Databricks . Collaborate with analysts and business stakeholders to translate data requirements into … Skills & Experience: Strong proficiency in SQL for data transformation and performance tuning. Solid experience with Python , ideally using PySpark in Azure Databricks . Hands-on experience with Azure Data Lake Storage Gen2 . Understanding of data warehouse concepts , dimensional modelling , and data architecture . Experience working with Delta Lake and large-scale data processing. Experience building ETL …
s data engineering capabilities as they scale their team and client base. Key Responsibilities: Architect and implement end-to-end, scalable data and AI solutions using the Databricks Lakehouse (Delta Lake, Unity Catalog, MLflow). Design and lead the development of modular, high-performance data pipelines using Apache Spark and PySpark. Champion the adoption of Lakehouse architecture (bronze …
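Since this role pairs Lakehouse engineering with MLflow, a tiny self-contained tracking sketch may help; the experiment path and model are assumptions, and on Databricks the tracking server is preconfigured so no URI setup is needed.

```python
import mlflow
import numpy as np
from sklearn.linear_model import LogisticRegression

mlflow.set_experiment("/Shared/churn-demo")   # hypothetical experiment path

# Tiny synthetic stand-in for features that would come from gold-layer tables.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")   # versioned artifact for later registry promotion
```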
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development and Delta Lake optimisation. Experience with ETL/ELT processes for integrating diverse data sources. Experience in gathering, documenting, and refining requirements from key business stakeholders to align BI solutions …
principles, including data modeling, data warehousing, data integration, and data governance. Databricks Expertise: They have hands-on experience with the Databricks platform, including its various components such as Spark, Delta Lake, MLflow, and Databricks SQL. They are proficient in using Databricks for various data engineering and data science tasks. Cloud Platform Proficiency: They are familiar with cloud platforms … integration patterns. Extensive experience with big data technologies and cloud computing, specifically Azure (minimum 3+ years hands-on experience with Azure data services). Strong experience with Azure Databricks, Delta Lake, and other relevant Azure services. Active Azure Certifications: At least one of the following is required: Microsoft Certified: Azure Data Engineer Associate; Microsoft Certified: Azure Data Scientist …
to create and maintain data assets and reports for business insights. Assist in engineering and managing data models and pipelines within a cloud environment, utilizing technologies like Databricks, Spark, Delta Lake, and SQL. Contribute to the maintenance and enhancement of our progressive tech stack, which includes Python, PySpark, Logic Apps, Azure Functions, ADLS, Django, and ReactJs. Support the …
Derby, England, United Kingdom Hybrid / WFH Options
Cooper Parry
Data Factory, Azure Functions) Experience integrating and transforming data from various sources such as Legacy Systems, RESTful APIs, Flat files Knowledge of modern data architectures: Data warehouse, Lakehouse, Data Lake Hands-on experience with Power BI, semantic modelling, and DAX Strong SQL and data manipulation skills. Exposure to Python and PySpark is required. Experience working with open data formats … like Delta Lake, Parquet, JSON, CSV. Familiarity with CI/CD pipelines, version control (e.g., Git), and deployment automation tools Bonus points if you have: Exposure to MuleSoft or other API integration tools Experience in data governance, data cataloguing, and privacy/security in cloud data environments Understanding of business domains such as professional services or financial services …
London, England, United Kingdom Hybrid / WFH Options
DATAPAO
efforts. What does it take to fit the bill? Technical Expertise: 5+ years in Data Engineering, focusing on cloud platforms (AWS, Azure, GCP); Proven experience with Databricks (PySpark, SQL, Delta Lake, Unity Catalog); Extensive ETL/ELT and data pipeline orchestration experience (e.g., Databricks Workflows, DLT, Airflow, ADF, Glue, Step Functions); Proficiency in SQL and Python for data …