teams to align operational strategies with technical and business requirements. Optimize operational performance and cost management for services including Azure Data Factory, Azure Databricks, Delta Lake, and Azure Data Lake Storage. Serve as the domain expert in DataOps by providing strategic guidance, mentoring colleagues, and driving continuous … of data workflows within the Microsoft Azure ecosystem. Hands-on expertise with Azure Data Platform components including Azure Data Factory, Azure Databricks, Azure Data Lake Storage, Delta Lake, Azure SQL, Purview, and APIM. Proficiency in developing CI/CD data pipelines and strong programming skills in Python.
Bradford, Yorkshire and the Humber, United Kingdom
Peregrine
customers to understand their business and technical needs, and to develop tailored technical architectures and solutions in the Cloud, focusing on data engineering, data lakes, lakehouses, business intelligence, and machine learning/AI. Cost Optimization: You will continuously optimize run costs - both at platform level as … and supporting Big Data solutions for data lakes and data warehouses. Expertise in cloud-based Big Data solutions is required - preferably with Azure Data Lake and the related technology stack: ADLS Gen2, Spark/Databricks, Delta Lake, Kafka/Event Hubs, Stream Analytics, Azure Data Factory, Azure DevOps.
Monitor and troubleshoot performance issues in data pipelines. To be successful in this role you should meet the following requirements: Must have experience with Delta Lake and Lakehouse architecture. Proven experience in data engineering, working with Azure Databricks, PySpark, and SQL. Hands-on experience with Prophecy for data …
team of 45 people, including Data Scientists, ML Engineers, and 2 Data Engineers. Day-to-day you will: Monitor, optimise, and rebuild ETL/Delta Lake workflows in Databricks. Migrate legacy ingestion jobs to modern, cloud-native patterns (Azure preferred, some AWS/GCP). Collaborate with scientists …
quality, and performance. Utilise Azure Databricks and adhere to code-based deployment practices. Essential Skills: Over 3 years of experience with Databricks (including Lakehouse, Delta Lake, PySpark, Spark SQL). Strong proficiency in SQL with 5+ years of experience. Extensive experience with Azure Data Factory. Proficiency in Python programming.
SQL and Python. Prior experience designing solutions on the Databricks Data Intelligence Platform, either on Azure or AWS. Good knowledge of Databricks components including Delta Lake, Unity Catalog, MLflow, etc. Experience building data pipelines and ETL processes. Experience with any of the following is highly desirable: Snowflake, Kafka, Azure Data …
and building end-to-end data pipelines. Proficient in Python and/or Scala; solid understanding of SQL and distributed computing principles. Experience with Delta Lake, Lakehouse architecture, and data governance frameworks. Excellent client-facing and communication skills. Experience in Azure Data Services is desirable (e.g. Azure Data … Lake, Synapse, Data Factory, Fabric).
and leveraging AI-powered tools to boost your productivity. Our Tech Stack Data Pipeline: data sources → Debezium/Kafka → S3 → Databricks/Lambda → S3 (Delta format) → Embedded Reporting/Notifications. Infrastructure: AWS cloud managed via Terraform. Responsibilities Develop & Deploy ML Models: Build models that power personalisation, recommendations, and … is a plus. Ability to quickly learn new tools and independently deliver scalable, high-quality solutions. Experience with data pipelines (Kafka, Debezium, S3, Lambda, Delta Lake) is advantageous.
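The ingestion leg of a stack like this (Debezium change events flowing through Kafka before landing in S3/Delta) typically starts by unwrapping the Debezium JSON envelope into flat rows. A minimal, dependency-free sketch of that step — the event shape follows Debezium's standard envelope (`payload` with `op`, `before`, `after`), but the helper name and sample record are illustrative:

```python
import json


def extract_after_state(raw_event: str):
    """Flatten a Debezium CDC change event into its post-change row.

    Returns the "after" image for creates/updates, or None for deletes.
    The field names (payload, op, before, after) follow the standard
    Debezium envelope; everything else here is a hypothetical example.
    """
    payload = json.loads(raw_event).get("payload", {})
    op = payload.get("op")  # "c" = create, "u" = update, "d" = delete
    if op == "d":
        return None
    return payload.get("after")


# Hypothetical update event for an "orders" table.
event = json.dumps({
    "payload": {
        "op": "u",
        "before": {"id": 7, "status": "pending"},
        "after": {"id": 7, "status": "shipped"},
        "source": {"table": "orders"},
    }
})
print(extract_after_state(event))  # {'id': 7, 'status': 'shipped'}
```

In a production pipeline this transform would run inside a Databricks streaming job or a Lambda consumer, with the flattened rows appended to the Delta table in S3.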
This role is ideal for someone who enjoys blending technical precision with innovation. You'll: Build and manage ML pipelines in Databricks using MLflow, Delta Lake, Spark, and Mosaic AI. Train and deploy generative models (LLMs, GANs, VAEs) for NLP, content generation, and synthetic data. Architect scalable solutions …