Warrington, Cheshire, North West England, United Kingdom
Harnham
team of 45 people, including Data Scientists, ML Engineers and 2 Data Engineers. Day-to-day you will: Monitor, optimise and rebuild ETL/Delta Lake workflows in Databricks. Migrate legacy ingestion jobs to modern, cloud-native patterns (Azure preferred, some AWS/GCP). Collaborate with scientists …
quality, and performance Utilise Azure Databricks and adhere to code-based deployment practices Essential Skills: Over 3 years of experience with Databricks (including Lakehouse, Delta Lake, PySpark, Spark SQL) Strong proficiency in SQL with 5+ years of experience Extensive experience with Azure Data Factory Proficiency in Python programming …
SQL and Python Prior experience designing solutions on the Databricks Data Intelligence Platform, either on Azure or AWS Good knowledge of Databricks components including Delta Lake, Unity Catalog, MLflow, etc. Experience building data pipelines and ETL processes Experience with any of the following is highly desirable: Snowflake, Kafka, Azure Data …
and building end-to-end data pipelines. Proficient in Python and/or Scala; solid understanding of SQL and distributed computing principles. Experience with Delta Lake, Lakehouse architecture, and data governance frameworks. Excellent client-facing and communication skills. Experience in Azure Data Services is desirable (e.g. Azure Data Lake, Synapse, Data Factory, Fabric).
Bolton, Greater Manchester, North West England, United Kingdom
Accelero
Warrington, Cheshire, North West England, United Kingdom
Accelero
with 3+ years leading Databricks-based solutions. Proven experience in a consulting environment delivering large-scale data platform projects. Hands-on expertise in Spark, Delta Lake, MLflow, Unity Catalog, and DBSQL. Strong proficiency in Python, SQL, and at least one major cloud platform (AWS, Azure, or GCP).
and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the Lakehouse, Apache Spark, Delta Lake, and MLflow. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees.
Coventry, Warwickshire, United Kingdom Hybrid / WFH Options
Cadent Gas
in SAP Datasphere or SAP BW/4HANA Advanced skills in SQL, data modelling, and data transformation Familiarity with Databricks, Apache Spark, PySpark, and Delta Lake Agile mindset with experience in DevOps and iterative delivery Excellent communication and stakeholder engagement abilities At Cadent, we're thrilled to be …
implementation experience with Microsoft Fabric (preferred), along with familiarity with Azure Synapse and Databricks. Experience in core data platform technologies and methods including Spark, Delta Lake, Medallion Architecture, pipelines, etc. Experience leading medium to large-scale cloud data platform implementations, guiding teams through technical challenges and ensuring alignment …
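The Medallion Architecture mentioned above layers data as bronze (raw), silver (cleansed), and gold (business-level aggregates). A minimal sketch of the idea, using plain Python dictionaries as a stand-in for Spark/Delta tables (the layer functions, record fields, and sample data are all hypothetical):

```python
# Toy medallion pipeline: bronze (raw) -> silver (cleansed) -> gold (aggregated).
RAW_EVENTS = [  # "bronze": data exactly as ingested, warts and all
    {"user": "alice", "amount": "12.50", "ts": "2024-01-01"},
    {"user": "bob", "amount": "bad-value", "ts": "2024-01-01"},
    {"user": "alice", "amount": "7.50", "ts": "2024-01-02"},
]

def to_silver(bronze):
    """Cleanse and type the raw records; drop rows that fail validation."""
    silver = []
    for row in bronze:
        try:
            silver.append(
                {"user": row["user"], "amount": float(row["amount"]), "ts": row["ts"]}
            )
        except ValueError:
            continue  # skip (or quarantine) malformed rows
    return silver

def to_gold(silver):
    """Aggregate to a business-level view: total spend per user."""
    totals = {}
    for row in silver:
        totals[row["user"]] = totals.get(row["user"], 0.0) + row["amount"]
    return totals

gold = to_gold(to_silver(RAW_EVENTS))
print(gold)  # {'alice': 20.0}
```

In a real Databricks implementation each layer would be a Delta table and the transforms would be Spark jobs; the point here is only the progressive refinement between layers.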
expert proven to deliver fast-paced releases. You will have worked with the latest Azure data platform technologies, particularly Azure Data Factory, Azure Data Lake Storage and Azure Databricks. Hands-on experience of working in Databricks, in particular design and usage of the Delta Lake storage format.
that powers high-impact analytics and machine learning solutions. Key Responsibilities Engineer and maintain modern data platforms with a strong focus on Databricks, including Delta Lake, Apache Spark, and MLflow Build and optimise CI/CD pipelines, infrastructure-as-code (IaC), and cloud integrations (Azure preferred, AWS/ …
to define, develop, and deliver impactful data products to both internal stakeholders and end customers. Responsibilities Design and implement scalable data pipelines using Databricks, Delta Lake, and Lakehouse architecture Build and maintain a customer-facing analytics layer, integrating with tools like Power BI, Tableau, or Metabase Optimise ETL processes …
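Incremental ETL pipelines on Delta Lake typically rely on MERGE (upsert) semantics: rows from an incoming batch update existing records by key and insert new ones. A simplified, dict-based sketch of that pattern in plain Python, not the Databricks API (function name, keys, and sample records are hypothetical):

```python
# Toy stand-in for Delta Lake's MERGE INTO: update matches, insert the rest.
def merge_upsert(target, updates, key="id"):
    """Apply upsert semantics: rows in `updates` overwrite matching keys
    in `target` and are appended when the key is new."""
    merged = {row[key]: row for row in target}
    for row in updates:
        # Merge fields so partial updates keep existing columns.
        merged[row[key]] = {**merged.get(row[key], {}), **row}
    return list(merged.values())

current = [{"id": 1, "status": "active"}, {"id": 2, "status": "active"}]
incoming = [{"id": 2, "status": "churned"}, {"id": 3, "status": "active"}]
result = merge_upsert(current, incoming)
print(sorted(r["id"] for r in result))  # [1, 2, 3]
```

With real Delta tables this would be a single `MERGE INTO` statement executed by Spark; the in-memory version above only illustrates the update-or-insert logic.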
for you to demonstrate include: Desire to expand your cloud/platform engineering capabilities Experience working with Big Data Experience of data storage technologies: Delta Lake, Iceberg, Hudi Knowledge and understanding of Apache Spark, Databricks or Hadoop Ability to take business requirements and translate these into tech specifications …