Key Responsibilities Lead the design, development, and maintenance of scalable, high-performance data pipelines on Databricks. Architect and implement data ingestion, transformation, and integration workflows using PySpark, SQL, and Delta Lake. Guide the team in migrating legacy ETL processes to modern cloud-based data pipelines. Ensure data accuracy, schema consistency, row counts, and KPIs during migration and transformation. Collaborate … and analytics. ________________________________________ Required Skills & Qualifications 10-12 years of experience in data engineering, with at least 3 years in a technical lead role. Strong expertise in Databricks, PySpark, Delta Lake, and DBT. Advanced proficiency in SQL, ETL/ELT pipelines, and data modelling. Experience with Azure Data Services (ADLS, ADF, Synapse) or other major cloud platforms (AWS … of data warehousing, transformation logic, SLAs, and dependencies. Hands-on experience with real-time streaming and near-real-time batch processing is a plus, as is optimisation of Databricks, DBT, and Delta Lake workloads. Familiarity with CI/CD pipelines, DevOps practices, and Git-based workflows. Knowledge of data security, encryption, and compliance frameworks (GDPR, SOC 2, ISO) is good to have. Excellent …
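To make the ingestion-to-Delta work this role describes concrete, here is a minimal PySpark sketch of a raw-file load with basic accuracy checks. It assumes a Delta-enabled Spark session (as on a Databricks cluster); every path, table, and column name is hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

# Assumes Delta Lake is available on the cluster (true by default on Databricks).
spark = SparkSession.builder.appName("ingest_orders").getOrCreate()

# Hypothetical landing zone of raw CSV drops.
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("/mnt/landing/orders/")
)

# Accuracy checks of the kind the role calls out: de-duplicate and reject
# rows missing the business key before they reach the Delta layer.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
)

clean.write.format("delta").mode("overwrite").save("/mnt/bronze/orders")
```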
in doing this properly - not endless meetings and PowerPoints. What you'll be doing: Designing, building, and optimising end-to-end data pipelines using Azure Databricks, PySpark, ADF, and Delta Lake Implementing a medallion architecture - from raw to enriched to curated Working with Delta Lake and Spark for both batch and streaming data Collaborating with analysts …
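A compact sketch of the medallion flow this posting describes (raw to enriched to curated), assuming data has already landed in a bronze Delta layer; paths and column names are illustrative only.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw events as ingested.
bronze = spark.read.format("delta").load("/mnt/bronze/events")

# Silver: validated and enriched.
silver = (
    bronze.filter(F.col("event_ts").isNotNull())
          .withColumn("event_date", F.to_date("event_ts"))
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/events")

# Gold: curated aggregate ready for analysts.
gold = silver.groupBy("event_date").agg(F.count("*").alias("daily_events"))
gold.write.format("delta").mode("overwrite").save("/mnt/gold/daily_events")
```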
Solihull, West Midlands, England, United Kingdom Hybrid / WFH Options
MYO Talent
Data Engineer/Data Engineering/Data Consultant/Lakehouse/Delta Lake/Data Warehousing/ETL/Azure/Azure Databricks/Python/SQL/Based in the West Midlands/Solihull/Birmingham area, Permanent role, £ + car/allowance (£5,000) + 15% bonus. One of our leading clients is looking to recruit … role Salary £ + car/allowance + bonus Experience: Experience in a Data Engineer/Data Engineering role Large and complex datasets Azure, Azure Databricks Microsoft SQL Server Lakehouse, Delta Lake Data Warehousing ETL Database Design Python/PySpark Azure Blob Storage Azure Data Factory Desirable: Exposure to ML/Machine Learning/AI/Artificial Intelligence …
or more of the following Data Platforms: Microsoft: Experience with Azure Data Factory for orchestrating data workflows. Proficiency in Azure Synapse Analytics and SQL Server. Familiarity with Azure Data Lake Storage (Gen2) and Azure Blob Storage. Knowledge of Power BI integration and data modelling. Understanding of Azure Functions and Logic Apps for automation. Snowflake: Strong SQL skills and experience … Python/Java-based transformations. Understanding of role-based access control, data masking, and time travel features. Databricks: Hands-on experience with Apache Spark and Databricks Runtime. Proficiency in Delta Lake for ACID-compliant data lakes. Experience with Structured Streaming and Auto Loader. Familiarity with MLflow, Feature Store, and Model Registry. Use of Databricks notebooks for collaborative development …
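Auto Loader, named above, is a Databricks-specific incremental ingestion source ("cloudFiles"), not part of open-source Spark. A minimal sketch, assuming a Databricks notebook where `spark` is predefined; all paths are placeholders.

```python
# Databricks Auto Loader: incrementally discover new JSON files and
# append them to a Delta table.
stream = (
    spark.readStream
         .format("cloudFiles")
         .option("cloudFiles.format", "json")
         .option("cloudFiles.schemaLocation", "/mnt/_schemas/events")
         .load("/mnt/landing/events/")
)

(stream.writeStream
       .format("delta")
       .option("checkpointLocation", "/mnt/_checkpoints/events")
       .trigger(availableNow=True)   # process the current backlog, then stop
       .start("/mnt/bronze/events"))
```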
London, South East, England, United Kingdom Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
mindset with a passion for modern data and platform technologies. Nice to Have: Experience implementing data governance and observability stacks (lineage, data contracts, quality monitoring). Knowledge of data lake formats (Delta Lake, Parquet, Iceberg, Hudi). Familiarity with containerisation and streaming technologies (Docker, Kubernetes, Kafka, Flink). Exposure to lakehouse or medallion architectures within Databricks. …
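For context on the table formats listed above: the practical difference between plain Parquet and Delta Lake is the transaction log, which adds ACID guarantees and time travel. A small illustrative sketch, assuming an existing Delta-enabled `spark` session; paths are hypothetical.

```python
df = spark.range(1000).withColumnRenamed("id", "customer_id")

# Plain Parquet: columnar data files only, no transaction log.
df.write.mode("overwrite").parquet("/mnt/files/customers_parquet")

# Delta: the same data plus a _delta_log directory, enabling ACID commits.
df.write.format("delta").mode("overwrite").save("/mnt/files/customers_delta")

# Time travel: read an earlier table version back, which Parquet alone
# does not offer.
v0 = (spark.read.format("delta")
           .option("versionAsOf", 0)
           .load("/mnt/files/customers_delta"))
```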
london (city of london), south east england, united kingdom
Mastek
data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Need to program using the languages …
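A small sketch of the cleaning, enriching, and aggregating step this posting describes, mixing the PySpark DataFrame API with Spark SQL; all table, view, and column names are hypothetical.

```python
from pyspark.sql import functions as F

# Assumes a Delta-enabled `spark` session, e.g. in a Databricks notebook.
txns = spark.read.format("delta").load("/mnt/silver/transactions")

# Clean and enrich: drop incomplete rows, derive a reporting-currency amount.
enriched = (
    txns.dropna(subset=["amount", "fx_rate"])
        .withColumn("amount_gbp", F.round(F.col("amount") * F.col("fx_rate"), 2))
)
enriched.createOrReplaceTempView("txns_enriched")

# Aggregate with Spark SQL for the curated layer.
daily = spark.sql("""
    SELECT txn_date,
           SUM(amount_gbp) AS total_gbp,
           COUNT(*)        AS txn_count
    FROM txns_enriched
    GROUP BY txn_date
""")
daily.write.format("delta").mode("overwrite").save("/mnt/gold/daily_totals")
```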
London, South East, England, United Kingdom Hybrid / WFH Options
Involved Solutions
methodologies that enhance efficiency, scalability, and cost optimisation Essential Skills for the Senior Data Engineer: Proficient with Databricks and Apache Spark, including performance tuning and advanced concepts such as Delta Lake and streaming Strong programming skills in Python with experience in software engineering principles, version control, unit testing and CI/CD pipelines Advanced knowledge of SQL and …
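Two of the Delta Lake tuning levers implied by "performance tuning" above, shown as a sketch; the table name is hypothetical, and OPTIMIZE/ZORDER/VACUUM are Delta/Databricks features rather than core Spark SQL.

```python
# Compact small files and co-locate rows by a frequently filtered column.
spark.sql("OPTIMIZE sales.orders ZORDER BY (customer_id)")

# Drop data files no longer referenced by the transaction log
# (168 hours keeps the default 7-day time-travel horizon).
spark.sql("VACUUM sales.orders RETAIN 168 HOURS")
```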
City of London, London, United Kingdom Hybrid / WFH Options
ECS
Azure with a core focus on Databricks Provide technical leadership on Databricks Design and implement data pipelines in Azure Databricks, optimising for performance, cost, and scalability Advise on data lake migration, data modelling and pipeline optimisation Refactor and modernise existing data models to integrate with Power BI Coordinate phased migrations to ensure zero disruption to live operations Replace legacy … a strong focus on building scalable data pipelines Expertise in Azure Databricks (7+ years) including building and managing ETL pipelines using PySpark or Scala (essential) Solid understanding of Apache Spark, Delta Lake, and distributed data processing concepts Hands-on experience with Azure Data Lake Storage Gen2, Azure Data Factory, and Azure Synapse Analytics Proficiency in SQL and Python …
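As a sketch of the ADLS Gen2-to-Delta migration pattern this role describes, assuming cluster credentials for the storage account are already configured; the storage account, container, and table names are placeholders.

```python
# Hypothetical legacy extract landed in ADLS Gen2.
src = "abfss://raw@examplestorage.dfs.core.windows.net/legacy/customers/"

df = spark.read.option("header", True).csv(src)

# Register as a managed Delta table (assumes the `migrated` schema exists)
# so downstream Power BI models can query it.
df.write.format("delta").mode("overwrite").saveAsTable("migrated.customers")
```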
at the forefront of emerging technologies. Nice to Have Experience rolling out data governance and observability frameworks, including lineage tracking, SLAs, and data quality monitoring. Familiarity with modern data lake table formats such as Delta Lake, Iceberg, or Hudi. Background in stream processing (Kafka, Flink, or similar ecosystems). Exposure to containerisation and orchestration technologies such as …
support advanced analytics, AI, and business intelligence use cases. Proven experience in designing architectures for structured, semi-structured, and unstructured data, leveraging technologies like Databricks, Snowflake, Apache Kafka, and Delta Lake to enable seamless data processing and analytics. Hands-on experience in data integration, including designing and optimising data pipelines (batch and streaming) and integrating cloud-based platforms …
and minimize complexity Exceptional interpersonal skills - you communicate clearly with stakeholders as well as other engineers, fostering a collaborative, supportive working environment Experience in the financial markets, especially in delta one, store of value, and/or FICC options trading Experience with Linux-based, concurrent, high-throughput, low-latency software systems Experience with pipeline orchestration frameworks (e.g. Airflow, Dagster … Experience with streaming platforms (e.g. Kafka), data lake platforms (e.g. Delta Lake, Apache Iceberg), and relational databases Have a Bachelor or advanced degree in Computer Science, Mathematics, Statistics, Physics, Engineering, or equivalent work experience For more information about DRW's processing activities and our use of job applicants' data, please view our Privacy Notice at …
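A minimal Structured Streaming sketch of the Kafka-to-Delta pattern those technologies imply; the broker address, topic, and paths are placeholders, and a Delta-enabled `spark` session is assumed.

```python
events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "trades")
         .load()
         .selectExpr("CAST(key AS STRING) AS key",
                     "CAST(value AS STRING) AS payload",
                     "timestamp")
)

# Append each micro-batch to a Delta table; the checkpoint makes the
# stream restartable with exactly-once sink semantics.
(events.writeStream
       .format("delta")
       .option("checkpointLocation", "/mnt/_checkpoints/trades")
       .start("/mnt/bronze/trades"))
```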
data warehousing, or analytics engineering Strong SQL and Python skills with hands-on experience in PySpark Exposure to Azure Databricks, Microsoft Fabric, or similar cloud data platforms Understanding of Delta Lake, Git, and CI/CD workflows Experience with relational data modelling and dimensional modelling Awareness of data governance tools such as Purview or Unity Catalog Excellent analytical …
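On the dimensional-modelling point above, a tiny illustrative step: conforming a fact table to a dimension via a surrogate-key join. All table and column names are hypothetical.

```python
fact_sales = spark.read.format("delta").load("/mnt/silver/fact_sales")
dim_customer = spark.read.format("delta").load("/mnt/gold/dim_customer")

# Left join on the surrogate key so unmatched facts are retained
# for data-quality review rather than silently dropped.
star = fact_sales.join(dim_customer, on="customer_key", how="left")
star.write.format("delta").mode("overwrite").save("/mnt/gold/sales_star")
```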
years as a Data Engineer Hands-on with Databricks, Spark, Python, SQL Cloud experience (Azure, AWS, or GCP) Strong understanding of data quality, governance, and security Nice to Have: Delta Lake, DBT, Snowflake, Terraform, CI/CD, or DevOps exposure You'll: Build and optimise ETL pipelines Enable analytics and reporting teams Drive automation and best practices Why …
london (city of london), south east england, united kingdom
Miryco Consultants Ltd
machine learning use cases. Support the migration of legacy reporting tools into Databricks and modern BI solutions. Key Skills & Experience Essential: Strong hands-on experience with Databricks (SQL, PySpark, Delta Lake). Solid knowledge of BI and data visualisation tools (e.g., Power BI, Tableau, Qlik). Strong SQL and data modelling skills. Experience working with large, complex financial …
Warrington, Cheshire, England, United Kingdom Hybrid / WFH Options
Brookson
Science, Mathematics, Engineering or other STEM A strong team player with empathy, humility and dedication to joint success and shared development. Desirable Experience and Qualifications: Experience with Databricks or Delta Lake architecture. Experience building architecture and Data Warehousing within the Microsoft Stack Experience in development source control (e.g. Bitbucket, GitHub) Experience in Low Code Analytical Tools (e.g. …
across SQL Server, PostgreSQL, and cloud databases Proven track record with complex data migration projects (terabyte+ datasets, multiple legacy source systems, structured and unstructured data) Proficiency with Parquet/Delta Lake or other modern data storage formats Experience with streaming architectures using Kafka, Event Hubs, or Kinesis for real-time data processing Knowledge of data architectures supporting AI …
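One common migration step behind the Parquet/Delta Lake requirement above is converting a legacy Parquet directory to Delta in place, so the historical files gain a transaction log without being rewritten. A sketch using the delta-spark API, assuming a Delta-configured `spark` session; the path is hypothetical.

```python
from delta.tables import DeltaTable

# In-place conversion: writes a _delta_log alongside the existing
# Parquet files, after which the directory is a queryable Delta table.
DeltaTable.convertToDelta(spark, "parquet.`/mnt/legacy/warehouse/orders`")
```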
facing consulting environment, with the ability to manage stakeholder expectations, navigate complex requirements, and deliver tailored solutions across diverse industries. 5+ years' experience working with Databricks, including Spark and Delta Lake Strong skills in Python and/or Scala for data engineering tasks Comfortable working with cloud platforms like Azure, AWS, and/or Google Cloud A problem …