London, South East, England, United Kingdom Hybrid / WFH Options
McGregor Boyall
first, modern data strategy within a collaborative and forward-thinking environment. Key Responsibilities: Design and develop end-to-end data pipelines (batch and streaming) using Azure Databricks, Spark, and Delta Lake. Implement the Medallion Architecture and ensure consistency across raw, enriched, and curated data layers. Build and optimise ETL/ELT processes using Azure Data Factory and PySpark. Enforce … stakeholders to ensure data quality and usability. Contribute to performance optimisation and cost efficiency across data solutions. Required Skills & Experience: Proven hands-on experience with Azure Databricks, Data Factory, Delta Lake, and Synapse. Strong proficiency in Python, PySpark, and advanced SQL. Understanding of Lakehouse architecture and medallion data patterns. Familiarity with data governance, lineage, and access control tools.
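The medallion layering named in this role (raw → enriched → curated) is normally built with PySpark over Delta Lake tables; as a minimal library-free sketch of the same idea, the example below pushes a few invented records through the three layers in plain Python (all field names and cleaning rules are hypothetical):

```python
# Illustrative medallion-style layering: raw (bronze) -> enriched (silver) -> curated (gold).
# In production these would be PySpark DataFrames backed by Delta Lake tables.

raw_layer = [  # bronze: data exactly as ingested, including a malformed row
    {"order_id": "1", "amount": "19.50", "country": "uk"},
    {"order_id": "2", "amount": "bad",   "country": "UK"},
    {"order_id": "3", "amount": "5.25",  "country": "uk"},
]

def enrich(records):
    """Silver: cast types, standardise values, drop rows that fail validation."""
    out = []
    for r in records:
        try:
            out.append({"order_id": int(r["order_id"]),
                        "amount": float(r["amount"]),
                        "country": r["country"].upper()})
        except ValueError:
            continue  # in a real pipeline, quarantine rather than silently drop
    return out

def curate(records):
    """Gold: a business-ready aggregate, e.g. revenue per country."""
    totals = {}
    for r in records:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

silver = enrich(raw_layer)
gold = curate(silver)
print(gold)  # {'UK': 24.75}
```

The point of the layering is that each stage is reproducible from the one before it, so bad data can be dropped or repaired without touching the raw ingest.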
real interest in doing this properly - not endless meetings and PowerPoints. What you'll be doing: Designing, building, and optimising Azure-based data pipelines using Databricks, PySpark, ADF, and Delta Lake Implementing a medallion architecture - from raw to curated Collaborating with analysts to make data business-ready Applying CI/CD and DevOps best practices (Git, Azure DevOps …
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
Degree in Computer Science, Software Engineering, or similar (applied to Data/Data Specialisation). Extensive experience in Data Engineering, in both Cloud & On-Prem, Big Data and Data Lake environments. Expert knowledge in data technologies, data transformation tools, data governance techniques. Strong analytical and problem-solving abilities. Good understanding of Quality and Information Security principles. Effective communication, ability … monitoring/security is necessary. Significant AWS or Azure hands-on experience. ETL Tools such as Azure Data Factory (ADF) and Databricks or similar ones. Data Lakes: Azure Data, Delta Lake, Data Lake or Databricks Lakehouse. Certifications: AWS, Azure, or Cloudera certifications are a plus. The role comes with an extensive benefits package including a good pension … role. KEYWORDS Lead Data Engineer, Senior Lead Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, On-Prem, Cloud, ETL, Azure Data Factory, ADF, Databricks, Azure Data, Delta Lake, Data Lake. Please note that due to a high level of applications, we can only respond to applicants whose skills and qualifications are suitable for this position.
Data Pipeline Development: Design and implement end-to-end data pipelines in Azure Databricks, handling ingestion from various data sources, performing complex transformations, and publishing data to Azure Data Lake or other storage services. Write efficient and standardized Spark SQL and PySpark code for data transformations, ensuring data integrity and accuracy across the pipeline. Automate pipeline orchestration using Databricks … various sources (APIs, databases, file systems). Implement data transformation logic using Spark, ensuring data is cleaned, transformed, and enriched according to business requirements. Leverage Databricks features such as Delta Lake to manage and track changes to data, enabling better versioning and performance for incremental data loads. Data Publishing & Integration: Publish clean, transformed data to Azure Data Lake … for data transformation and processing within Databricks, along with experience building workflows and automation using Databricks Workflows. Azure Data Services: Hands-on experience with Azure services like Azure Data Lake, Azure Blob Storage, and Azure Synapse for data storage, processing, and publication. Data Governance & Security: Familiarity with managing data governance and security using Databricks Unity Catalog, ensuring data is …
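The incremental-load pattern this role describes reduces to an upsert keyed on a business identifier, which Delta Lake expresses as `MERGE INTO` (or `DeltaTable.merge` in Python). The stand-alone sketch below mimics those merge semantics in plain Python; the table shape, field names, and version counter are invented for illustration:

```python
# Illustrative "MERGE" semantics behind an incremental load:
# WHEN MATCHED THEN UPDATE, WHEN NOT MATCHED THEN INSERT.

target = {  # existing curated table, keyed by customer_id
    1: {"customer_id": 1, "email": "a@example.com", "version": 1},
    2: {"customer_id": 2, "email": "b@example.com", "version": 1},
}

incremental_batch = [  # new load: one update, one brand-new key
    {"customer_id": 2, "email": "b.new@example.com"},
    {"customer_id": 3, "email": "c@example.com"},
]

def merge(target, updates, key="customer_id"):
    """Apply an upsert batch to the target table in place."""
    for row in updates:
        k = row[key]
        if k in target:
            target[k].update(row)
            target[k]["version"] += 1  # crude stand-in for Delta's table versioning
        else:
            target[k] = {**row, "version": 1}
    return target

merge(target, incremental_batch)
print(sorted(target))        # [1, 2, 3]
print(target[2]["version"])  # 2
```

In a real Delta table the "version" bookkeeping is automatic (time travel), so the pipeline only has to state the match key and the update/insert clauses.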
based data solutions using Databricks, Python, Spark, and Kafka - working on both greenfield initiatives and enhancing high-traffic financial applications. Key Skills & Experience: Strong hands-on experience with Databricks, Delta Lake, Spark Structured Streaming, and Unity Catalog Advanced Python/PySpark and big data pipeline development Familiar with event streaming tools (Kafka, Azure Event Hubs) Solid understanding of …
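Much of the Spark Structured Streaming work named here boils down to keyed, time-windowed aggregations over an event stream (in Databricks, `spark.readStream` with `groupBy(window(...))`). The plain-Python sketch below shows that core pattern over a hypothetical batch of price ticks; event shape and window size are invented:

```python
# Illustrative tumbling-window aggregation, the core pattern behind a
# Structured Streaming groupBy(window(...)) job.

events = [  # (epoch_seconds, symbol, price) ticks, e.g. from a Kafka topic
    (0,  "VOD", 100.0),
    (30, "VOD", 101.0),
    (70, "VOD", 99.0),
    (95, "BP",  50.0),
]

def tumbling_avg(events, window_s=60):
    """Average price per (window_start, symbol) over fixed-size windows."""
    sums, counts = {}, {}
    for ts, sym, price in events:
        key = (ts // window_s * window_s, sym)  # bucket timestamp to window start
        sums[key] = sums.get(key, 0.0) + price
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

result = tumbling_avg(events)
print(result)  # {(0, 'VOD'): 100.5, (60, 'VOD'): 99.0, (60, 'BP'): 50.0}
```

The streaming engine adds the hard parts on top of this (state management, late data, watermarks), but the aggregation logic itself is the same.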
London, South East, England, United Kingdom Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
to-end, making meaningful contributions within a small, agile team. Experience We're looking for candidates with: Extensive experience in Data Engineering with a focus on Azure, Databricks, and Delta Lake. Proficiency in Kubernetes, Infrastructure as Code, and Terraform. Expertise in Azure DevOps and a commitment to best practices. A preference for simple, transparent solutions and a drive for …
stakeholders. Expertise in designing and documenting data architectures (e.g., data warehouses, lakehouses, master/reference data models). Hands-on experience with Azure Databricks, including: Workspace and cluster configuration. Delta Lake table design and optimization. Integration with Unity Catalog for metadata management. Proficiency with Unity Catalog, including: Setting up data lineage and governance policies. Managing access controls and …
Jenkins). Familiarity with large-scale data management and engineering best practices. Bonus Points For: Workflow orchestration tools like Airflow. Working knowledge of Kafka and Kafka Connect. Experience with Delta Lake and lakehouse architectures. Proficiency in data serialization formats: JSON, XML, PARQUET, YAML. Cloud-based data services experience. Ready to build the future of data? If you're …
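Of the serialization formats this last role lists, JSON and XML are covered by the Python standard library alone (Parquet and YAML need third-party libraries such as pyarrow and PyYAML). A quick round-trip of the same invented record through both built-in formats:

```python
import json
import xml.etree.ElementTree as ET

record = {"id": 42, "name": "pipeline-run", "ok": True}  # invented sample record

# JSON round-trip: types (int, bool) survive the trip
json_text = json.dumps(record)
assert json.loads(json_text) == record

# XML round-trip via attributes: everything becomes a string,
# so types must be re-cast on the way back in
root = ET.Element("record", {k: str(v) for k, v in record.items()})
xml_text = ET.tostring(root, encoding="unicode")
parsed = ET.fromstring(xml_text).attrib

print(json_text)
print(xml_text)
```

The contrast is the practical point: schema-less text formats like JSON keep some type information, XML attributes keep none, and columnar formats like Parquet carry a full typed schema.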