modern data platforms and engineering practices. Key competencies include: Databricks Platform Expertise: Proven experience designing and delivering data solutions using Databricks on Azure or AWS. Databricks Components: Proficient in Delta Lake, Unity Catalog, MLflow, and other core Databricks tools. Programming & Query Languages: Strong skills in SQL and Apache Spark (Scala or Python). Relational Databases: Experience with on…
data retention rules, and privacy regulations. Required Skills and Experience: 5+ years of experience in data engineering or similar roles. Strong experience with Databricks, including notebooks, cluster configuration, and Delta Lake. Proficiency in dbt for transformation logic and version-controlled data modeling. Deep knowledge of Azure Data Factory, including pipeline orchestration and integration with other Azure services. Experience with…
in building and deploying modern data solutions based on Azure Databricks, enabling faster and more informed business decisions. You'll work hands-on with Azure Databricks, Azure Data Factory, Delta Lake, and Power BI to design scalable data pipelines, implement efficient data models, and ensure high-quality data delivery. This is a great opportunity to shape the future … within the organisation while working with advanced cloud technologies. Key Responsibilities and Deliverables: Design, develop, and optimise end-to-end data pipelines (batch & streaming) using Azure Databricks, Spark, and Delta Lake. Implement Medallion Architecture to structure raw, enriched, and curated data layers efficiently. Build scalable ETL/ELT processes with Azure Data Factory and PySpark. Support data governance initiatives … Collaborate with analysts to validate and refine datasets for reporting. Apply DevOps and CI/CD best practices (Git, Azure DevOps) for automated testing and deployment. Optimise Spark jobs, Delta Lake tables, and SQL queries for performance and cost-effectiveness. Troubleshoot and proactively resolve data pipeline issues. Partner with data architects, analysts, and business teams to deliver end…
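To illustrate the medallion (bronze/silver) pipeline work these responsibilities describe, here is a minimal PySpark sketch. The storage paths, schema, and column names are invented for illustration, and writing Delta format assumes a Delta-enabled Spark environment such as Databricks:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: land raw events as-is, adding only ingestion metadata.
# The source path and columns below are hypothetical.
raw = (spark.read.json("abfss://landing@example.dfs.core.windows.net/events/")
       .withColumn("_ingested_at", F.current_timestamp()))
raw.write.format("delta").mode("append").save("/mnt/bronze/events")

# Silver: cleanse and conform - deduplicate, derive typed columns,
# and filter out rows that fail basic validity checks.
silver = (spark.read.format("delta").load("/mnt/bronze/events")
          .dropDuplicates(["event_id"])
          .withColumn("event_date", F.to_date("event_ts"))
          .filter(F.col("event_id").isNotNull()))
silver.write.format("delta").mode("overwrite").save("/mnt/silver/events")
```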
fast-growing organisation. Key Responsibilities: Design, develop, and maintain scalable data pipelines using SQL and Python (PySpark). Ingest, transform, and curate data from multiple sources into Azure Data Lake and Delta Lake formats. Build and optimize datasets for performance and reliability in Azure Databricks. Collaborate with analysts and business stakeholders to translate data requirements into … Skills & Experience: Strong proficiency in SQL for data transformation and performance tuning. Solid experience with Python, ideally using PySpark in Azure Databricks. Hands-on experience with Azure Data Lake Storage Gen2. Understanding of data warehouse concepts, dimensional modelling, and data architecture. Experience working with Delta Lake and large-scale data processing. Experience building ETL…
key role in the design and delivery of advanced Databricks solutions within the Azure ecosystem. Responsibilities: Design, build, and optimise end-to-end data pipelines using Azure Databricks, including Delta Live Tables. Collaborate with stakeholders to define technical requirements and propose Databricks-based solutions. Drive best practices for data engineering. Help clients realise the potential of data science, machine … Support with planning, requirements refinement, and work estimation. Skills & Experience: Proven experience designing and implementing data solutions in Azure using Databricks as a core platform. Hands-on expertise in Delta Lake, Delta Live Tables, and Databricks Workflows. Strong coding skills in Python and SQL, with experience in developing modular, reusable code in Databricks. Deep understanding of lakehouse…
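Where listings like this call for Delta Live Tables, the work typically means declaring tables as decorated functions. The following is a minimal sketch with made-up table, path, and column names; it runs only inside a Databricks DLT pipeline, where the dlt module and the spark session are provided by the runtime:

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested from cloud storage via Auto Loader.")
def orders_bronze():
    # cloudFiles (Auto Loader) incrementally picks up new files.
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/landing/orders"))

@dlt.table(comment="Validated orders with a basic data-quality expectation.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def orders_silver():
    # Rows failing the expectation above are dropped and counted in metrics.
    return (dlt.read_stream("orders_bronze")
            .withColumn("loaded_at", F.current_timestamp()))
```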
s data engineering capabilities as they scale their team and client base. Key Responsibilities: Architect and implement end-to-end, scalable data and AI solutions using the Databricks Lakehouse (Delta Lake, Unity Catalog, MLflow). Design and lead the development of modular, high-performance data pipelines using Apache Spark and PySpark. Champion the adoption of Lakehouse architecture (bronze…
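To ground the MLflow piece of that Lakehouse stack, here is a minimal, self-contained experiment-tracking sketch. The experiment path, model choice, and parameters are invented for illustration and are not taken from the listing:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy dataset standing in for real feature data.
X, y = make_classification(n_samples=1_000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("/Shared/lakehouse-demo")  # hypothetical experiment path

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    # Log the parameters, metrics, and the fitted model artifact.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```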
to create and maintain data assets and reports for business insights. Assist in engineering and managing data models and pipelines within a cloud environment, utilizing technologies like Databricks, Spark, Delta Lake, and SQL. Contribute to the maintenance and enhancement of our progressive tech stack, which includes Python, PySpark, Logic Apps, Azure Functions, ADLS, Django, and ReactJs. Support the…
London, South East, England, United Kingdom (Hybrid/WFH options)
McGregor Boyall
first, modern data strategy within a collaborative and forward-thinking environment. Key Responsibilities: Design and develop end-to-end data pipelines (batch and streaming) using Azure Databricks, Spark, and Delta Lake. Implement the Medallion Architecture and ensure consistency across raw, enriched, and curated data layers. Build and optimise ETL/ELT processes using Azure Data Factory and PySpark. Enforce … stakeholders to ensure data quality and usability. Contribute to performance optimisation and cost efficiency across data solutions. Required Skills & Experience: Proven hands-on experience with Azure Databricks, Data Factory, Delta Lake, and Synapse. Strong proficiency in Python, PySpark, and advanced SQL. Understanding of Lakehouse architecture and medallion data patterns. Familiarity with data governance, lineage, and access control tools.
least 10 years' experience in Business Intelligence, with 5+ years in a BI leadership role in a global or matrixed organisation. Proven expertise in modern BI architecture (Data Lake, EDW, Streaming, APIs, Real-Time & Batch Processing). Demonstrated experience delivering cloud-based analytics platforms (Azure, AWS, GCP). Strong knowledge of data governance, cataloguing, security, automation, and self … The Head of Data Engineering & Insight will work within a modern, cloud-based BI ecosystem, including: Data Integration: Fivetran, HVR, Databricks, Apache Kafka, Google BigQuery, Google Analytics 4. Data Lake & Storage: Databricks Delta Lake, Amazon S3. Data Transformation: dbt Cloud. Data Warehouse: Snowflake. Analytics & Reporting: Power BI, Excel, Snowflake SQL, REST API. Advanced Analytics: Databricks (AI & Machine…
Databricks. You’ll work with clients and internal teams to deliver scalable, efficient data solutions tailored to business needs. Key Responsibilities: Develop ETL/ELT pipelines with Databricks and Delta Lake. Integrate and process data from diverse sources. Collaborate with data scientists, architects, and analysts. Optimize performance and manage Databricks clusters. Build cloud-native solutions (Azure preferred, AWS … architecture and processes. What We’re Looking For. Required: 5+ years in data engineering with hands-on Databricks experience. Databricks Champion Status (Solution Architect/Partner). Proficient in Databricks, Delta Lake, Spark, Python, SQL. Cloud experience (Azure preferred, AWS/GCP a plus). Strong problem-solving and communication skills. Databricks Champion…
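The performance and cluster-management duties above often include routine Delta table maintenance. A small sketch of two common commands follows, with a hypothetical table and column; OPTIMIZE/ZORDER and VACUUM assume Databricks or another Delta Lake deployment with SQL support:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows on a frequently filtered
# column, so selective queries skip more data. Names are illustrative.
spark.sql("OPTIMIZE sales.transactions ZORDER BY (customer_id)")

# Reclaim storage from files no longer referenced by the transaction log,
# keeping 7 days (168 hours) of history for time travel and readers.
spark.sql("VACUUM sales.transactions RETAIN 168 HOURS")
```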
Data Pipeline Development: Design and implement end-to-end data pipelines in Azure Databricks, handling ingestion from various data sources, performing complex transformations, and publishing data to Azure Data Lake or other storage services. Write efficient and standardized Spark SQL and PySpark code for data transformations, ensuring data integrity and accuracy across the pipeline. Automate pipeline orchestration using Databricks … various sources (APIs, databases, file systems). Implement data transformation logic using Spark, ensuring data is cleaned, transformed, and enriched according to business requirements. Leverage Databricks features such as Delta Lake to manage and track changes to data, enabling better versioning and performance for incremental data loads. Data Publishing & Integration: Publish clean, transformed data to Azure Data Lake … for data transformation and processing within Databricks, along with experience building workflows and automation using Databricks Workflows. Azure Data Services: Hands-on experience with Azure services like Azure Data Lake, Azure Blob Storage, and Azure Synapse for data storage, processing, and publication. Data Governance & Security: Familiarity with managing data governance and security using Databricks Unity Catalog, ensuring data is…
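The change-tracking and incremental-load pattern this listing describes usually centres on Delta Lake's MERGE operation. Here is a minimal upsert sketch, assuming hypothetical paths and a customer_id key; it requires the delta-spark package (bundled on Databricks):

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# A batch of changed rows staged by an upstream ingestion step.
updates = spark.read.format("delta").load("/mnt/staging/customers_changes")

# The curated target table to merge into.
target = DeltaTable.forPath(spark, "/mnt/curated/customers")

# Upsert: update rows whose key already exists, insert the rest.
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```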
Data Platform and Services, you'll not only maintain and optimize our data infrastructure but also spearhead its evolution. Built predominantly on Databricks, and utilizing technologies like PySpark and Delta Lake, our infrastructure is designed for scalability, robustness, and efficiency. You'll take charge of developing sophisticated data integrations with various advertising platforms, empowering our teams with data … decision-making. What you'll be doing for us: Leadership in Design and Development: Lead in the architecture, development, and upkeep of our Databricks-based infrastructure, harnessing PySpark and Delta Lake. CI/CD Pipeline Mastery: Create and manage CI/CD pipelines, ensuring automated deployments and system health monitoring. Advanced Data Integration: Develop sophisticated strategies for integrating data…
London, South East, England, United Kingdom (Hybrid/WFH options)
Harnham - Data & Analytics Recruitment
to-end, making meaningful contributions within a small, agile team. Experience: We're looking for candidates with: Extensive experience in Data Engineering with a focus on Azure, Databricks, and Delta Lake. Proficiency in Kubernetes, Infrastructure as Code, and Terraform. Expertise in Azure DevOps and a commitment to best practices. A preference for simple, transparent solutions and a drive for…
Reading, Berkshire, South East, United Kingdom (Hybrid/WFH options)
Bowerford Associates
Degree in Computer Science, Software Engineering, or similar (applied to Data/Data Specialisation). Extensive experience in Data Engineering, in both Cloud & On-Prem, Big Data and Data Lake environments. Expert knowledge in data technologies, data transformation tools, data governance techniques. Strong analytical and problem-solving abilities. Good understanding of Quality and Information Security principles. Effective communication, ability … monitoring/security is necessary. Significant AWS or Azure hands-on experience. ETL Tools such as Azure Data Factory (ADF) and Databricks or similar. Data Lakes: Azure Data Lake, Delta Lake, or Databricks Lakehouse. Certifications: AWS, Azure, or Cloudera certifications are a plus. The role comes with an extensive benefits package including a good pension … role. KEYWORDS Lead Data Engineer, Senior Lead Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, On-Prem, Cloud, ETL, Azure Data Factory, ADF, Databricks, Azure Data Lake, Delta Lake, Data Lake. Please note that due to a high level of applications, we can only respond to applicants whose skills and qualifications are suitable for this position.
based data solutions using Databricks, Python, Spark, and Kafka, working on both greenfield initiatives and enhancing high-traffic financial applications. Key Skills & Experience: Strong hands-on experience with Databricks, Delta Lake, Spark Structured Streaming, and Unity Catalog. Advanced Python/PySpark and big data pipeline development. Familiar with event streaming tools (Kafka, Azure Event Hubs). Solid understanding of…
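As a rough picture of the Spark Structured Streaming work this role lists, here is a minimal Kafka-to-Delta sketch. The broker address, topic, and paths are placeholders, and the Kafka source requires the spark-sql-kafka connector on the classpath (included on Databricks):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read an unbounded stream of records from a Kafka topic.
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "payments")
          .load()
          # Kafka delivers key/value as binary; cast for downstream use.
          .select(F.col("key").cast("string"),
                  F.col("value").cast("string"),
                  "timestamp"))

# Append continuously to a bronze Delta table; the checkpoint location
# lets the query recover exactly where it left off after a restart.
(events.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/payments")
 .outputMode("append")
 .start("/mnt/bronze/payments"))
```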
real interest in doing this properly - not endless meetings and PowerPoints. What you'll be doing: Designing, building, and optimising Azure-based data pipelines using Databricks, PySpark, ADF, and Delta Lake. Implementing a medallion architecture, from raw to curated. Collaborating with analysts to make data business-ready. Applying CI/CD and DevOps best practices (Git, Azure DevOps…
unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn, and Facebook. Benefits: At Databricks, we strive to provide comprehensive benefits and perks that meet the…
Bristol, Avon, England, United Kingdom (Hybrid/WFH options)
Tenth Revolution Group
on coding experience with Python or PySpark. Proven expertise in building data pipelines using Azure Data Factory or Fabric Pipelines. Solid experience with Azure technologies like Lakehouse Architecture, Data Lake, Delta Lake, and Azure Synapse. Strong command of SQL. Excellent communication and collaboration skills. What's in It for You: Up to £60,000 salary depending on…
Our platform unifies data, analytics, and AI, enabling organizations to democratize data access and insights. Headquartered in San Francisco, Databricks was founded by the creators of Lakehouse, Apache Spark, Delta Lake, and MLflow. To learn more, follow us on Twitter, LinkedIn, and Facebook. Benefits: We offer comprehensive benefits tailored to regional needs. For details, visit our benefits page.
Kafka, or other data processing frameworks, or platforms like Databricks or Snowflake. Knowledge of data governance, data security practices, and best practices for managing large data sets that use Iceberg or Delta Lake. Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes). If you're a proactive, innovative, and results-driven engineer passionate about building powerful data-driven systems and…