enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks, including Unity Catalog, Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. …
in Business Intelligence, with 5+ years in a BI leadership role in a global or matrixed organisation. Proven expertise in modern BI architecture (Data Lake, EDW, Streaming, APIs, Real-Time & Batch Processing). Demonstrated experience delivering cloud-based analytics platforms (Azure, AWS, GCP). Strong knowledge of data governance … BI will work within a modern, cloud-based BI ecosystem, including: Data Integration: Fivetran, HVR, Databricks, Apache Kafka, Google BigQuery, Google Analytics 4. Data Lake & Storage: Databricks Delta Lake, Amazon S3. Data Transformation: dbt Cloud. Data Warehouse: Snowflake. Analytics & Reporting: Power BI, Excel, Snowflake SQL, REST API …
in building/architecting data analytic solutions. 3 years of experience in building data platforms using Azure services including Azure Databricks, Azure Data Factory, Delta Lake, Azure Data Lake Storage (ADLS), and Power BI. Solid hands-on experience with Azure Databricks - PySpark coding and Spark SQL coding - must have. …
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development and Delta Lake optimisation. Experience with ETL/ELT processes for integrating diverse data sources. Experience in gathering, documenting, and refining requirements from key business …
we do. Passion for data and experience working within a data-driven organization. Hands-on experience with architecting, implementing, and performance tuning of: Data Lake technologies (e.g. Delta Lake, Parquet, Spark, Databricks); APIs & microservices; message queues, streaming technologies, and event-driven architecture; NoSQL databases and query languages …
they scale their team and client base. Key Responsibilities: Architect and implement end-to-end, scalable data and AI solutions using the Databricks Lakehouse (Delta Lake, Unity Catalog, MLflow). Design and lead the development of modular, high-performance data pipelines using Apache Spark and PySpark. Champion the …
business use cases. Strong knowledge of data governance, data warehousing, and data security principles. Hands-on experience with modern data stacks and technologies (e.g., Delta Lake, SQL, Python, Azure/AWS/GCP). Experience aligning data capabilities with commercial strategy and business performance. Exceptional communication skills. …
City of London, London, United Kingdom Hybrid / WFH Options
83zero Ltd
is a MUST! Key expertise and experience we're looking for: Data Engineering in Databricks - Spark programming with Scala, Python, SQL; ideally experience with Delta Lake, Databricks workflows, jobs, etc. Familiarity with Azure Data Lake: experience with data ingestion and ETL/ELT frameworks. Data Governance experience …
complex data concepts to non-technical stakeholders. Preferred Skills: Experience with insurance platforms such as Guidewire, Duck Creek, or legacy PAS systems. Knowledge of Delta Lake, Apache Spark, and data pipeline orchestration tools. Exposure to Agile delivery methodologies and tools like JIRA, Confluence, or Azure DevOps. Understanding of …
to define, develop, and deliver impactful data products to both internal stakeholders and end customers. Responsibilities: Design and implement scalable data pipelines using Databricks, Delta Lake, and Lakehouse architecture. Build and maintain a customer-facing analytics layer, integrating with tools like Power BI, Tableau, or Metabase. Optimise ETL processes …
performance, scalability, and security. Collaborate with business stakeholders, data engineers, and analytics teams to ensure solutions are fit for purpose. Implement and optimise Databricks Delta Lake, Medallion Architecture, and Lakehouse patterns for structured and semi-structured data. Ensure best practices in Azure networking, security, and federated data access …
and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits: At Databricks, we strive to provide comprehensive benefits and …
team of 45 people, including Data Scientists, ML Engineers and 2 Data Engineers. Day-to-day you will: Monitor, optimise and rebuild ETL/Delta Lake workflows in Databricks. Migrate legacy ingestion jobs to modern, cloud-native patterns (Azure preferred, some AWS/GCP). Collaborate with scientists …
quality, and performance. Utilise Azure Databricks and adhere to code-based deployment practices. Essential Skills: Over 3 years of experience with Databricks (including Lakehouse, Delta Lake, PySpark, Spark SQL). Strong proficiency in SQL with 5+ years of experience. Extensive experience with Azure Data Factory. Proficiency in Python programming …