Reading, Berkshire, United Kingdom Hybrid / WFH Options
Bowerford Associates
Degree in Computer Science, Software Engineering, or similar (applied to a Data specialisation). Extensive experience in Data Engineering across Cloud and On-Prem, Big Data, and Data Lake environments. Expert knowledge of data technologies, data transformation tools, and data governance techniques. Strong analytical and problem-solving abilities. Good understanding of Quality and Information Security principles. Effective communication, ability … monitoring/security is necessary. Significant AWS or Azure hands-on experience. ETL tools such as Azure Data Factory (ADF) and Databricks, or similar. Data Lakes: Azure Data Lake, Delta Lake, or Databricks Lakehouse. Certifications: AWS, Azure, or Cloudera certifications are a plus. To be considered for this role you MUST have in-depth experience … role. KEYWORDS: Lead Data Engineer, Senior Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, Cloud, On-Prem, ETL, Azure Data Factory, ADF, Hadoop, HDFS, Azure Data Lake, Delta Lake. Please note that due to a high level of applications, we can only respond to applicants whose skills and qualifications are suitable for this role.
Employment Type: Permanent
Salary: £75,000 - £80,000/annum + Pension, Good Holiday, Healthcare
Leeds, West Yorkshire, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
for someone with strong Databricks expertise to join their team. About the role: Designing and developing robust data pipelines using Azure Data Factory, Databricks, and Synapse Analytics. Working with Delta Lake and Azure Data Lake Storage to manage and optimise large datasets. Collaborating with data analysts, engineers, and business stakeholders to deliver clean, reliable data. Supporting the … of legacy systems to a modern Azure-based architecture. Ensuring best practices in data governance, security, and performance tuning. Requirements: Proven experience with Azure Data Services (ADF, Synapse, Data Lake). Strong hands-on experience with Databricks (including PySpark or SQL). Solid SQL skills and understanding of data modelling and ETL/ELT processes. Familiarity with Delta Lake.
key role in the design and delivery of advanced Databricks solutions within the Azure ecosystem. Responsibilities: Design, build, and optimise end-to-end data pipelines using Azure Databricks, including Delta Live Tables. Collaborate with stakeholders to define technical requirements and propose Databricks-based solutions. Drive best practices for data engineering. Help clients realise the potential of data science and machine learning. … Support with planning, requirements refinement, and work estimation. Skills & Experience: Proven experience designing and implementing data solutions in Azure using Databricks as a core platform. Hands-on expertise in Delta Lake, Delta Live Tables, and Databricks Workflows. Strong coding skills in Python and SQL, with experience in developing modular, reusable code in Databricks. Deep understanding of lakehouse architecture.
spoke data architectures, optimising for performance, scalability, and security. Collaborate with business stakeholders, data engineers, and analytics teams to ensure solutions are fit for purpose. Implement and optimise Databricks Delta Lake, Medallion Architecture, and Lakehouse patterns for structured and semi-structured data. Ensure best practices in Azure networking, security, and federated data access. Key Skills & Experience: 5+ years of experience in data architecture, solution architecture, or cloud data engineering roles. Strong expertise in Azure data services, including: Azure Databricks (Delta Lake, Unity Catalog, MLflow); Azure Event Hub and Azure Data Explorer for real-time and streaming data pipelines; Azure Storage (ADLS Gen2, Blob Storage) for scalable data lakes; Azure Purview or equivalent for data governance.
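The Medallion Architecture mentioned in this listing layers data as bronze (raw), silver (cleaned), and gold (aggregated). A minimal, framework-free sketch of that flow, using plain Python dictionaries in place of Databricks Delta tables (all field names are hypothetical):

```python
# Illustrative medallion-style flow: bronze (raw) -> silver (cleaned) -> gold (aggregated).
# Plain Python stands in for what Delta tables would hold on a real platform.

def to_silver(bronze_rows):
    """Clean raw (bronze) records: drop incomplete rows, normalise types and casing."""
    silver = []
    for row in bronze_rows:
        if row.get("amount") is None or not row.get("customer"):
            continue  # skip malformed records (a real pipeline might quarantine them)
        silver.append({
            "customer": row["customer"].strip().lower(),
            "amount": float(row["amount"]),
        })
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned (silver) records into an analytics-ready (gold) view."""
    totals = {}
    for row in silver_rows:
        totals[row["customer"]] = totals.get(row["customer"], 0.0) + row["amount"]
    return totals

bronze = [
    {"customer": " Acme ", "amount": "100.5"},
    {"customer": "Acme", "amount": 49.5},
    {"customer": "", "amount": 10},          # rejected: no customer
    {"customer": "Globex", "amount": None},  # rejected: no amount
]
print(to_gold(to_silver(bronze)))  # {'acme': 150.0}
```

On Databricks the same shape would typically be expressed as Delta tables with each layer materialised separately; the point here is only the progressive refinement between layers.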
Bristol, Avon, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
on coding experience with Python or PySpark. Proven expertise in building data pipelines using Azure Data Factory or Fabric Pipelines. Solid experience with Azure technologies like Lakehouse Architecture, Data Lake, Delta Lake, and Azure Synapse. Strong command of SQL. Excellent communication and collaboration skills. What's in It for You: Up to £60,000 salary, depending on experience.
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
MYO Talent
Data Engineer/Data Engineering/Lakehouse/Delta Lake/Data Warehousing/ETL/Azure/Azure Databricks/Python/SQL/Based in the West Midlands/Solihull/Birmingham area (1 day per week), Permanent role, £50,000 - £70,000 + car/allowance + bonus. One of our leading clients is looking … + car/allowance + bonus. Experience: Experience in a Data Engineer/Data Engineering role. Large and complex datasets. Azure, Azure Databricks, Microsoft SQL Server, Lakehouse, Delta Lake, Data Warehousing, ETL, CDC, Stream Processing, Database Design, ML, Python/PySpark, Azure Blob Storage, Parquet, Azure Data Factory. Desirable: Any exposure working in a software house, consultancy, or retail.
data estate, built primarily on Microsoft Azure and Databricks. Key Responsibilities: Design and implement scalable, secure cloud-based data architecture (Azure & Databricks). Develop optimised data models and pipelines using Delta Lake and Azure services. Define data standards, policies, and governance practices aligned with compliance. Enable real-time analytics and machine learning use cases across business functions. Ensure data … engineering and architecture. Collaborate with internal stakeholders and third-party vendors. Key Skills & Experience: Proven background designing and delivering enterprise-scale data platforms. Strong knowledge of Microsoft Azure, Databricks, Delta Lake, and data warehousing. Advanced data modelling and ETL/ELT optimisation experience. Familiarity with regulatory frameworks such as IFRS 17, BCBS 239, and UK Data Protection. Excellent communication skills.
Leeds, West Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
Unolabs
being phased out by late 2025. This project involves migrating pipelines and platform infrastructure running on PVC to the Databricks Enterprise Edition (E2) using EC2-based compute, which enables Delta Lake, advanced security, and scalable architecture. The project also includes decomposing and migrating core components of a centralized automation framework, a multi-repository CI/CD system that … cloud-native environments. Strong experience with AWS services including EC2, IAM, ECR, S3, and Autoscaling. Deep knowledge of Databricks, including workspace setup, instance profiles, notebook CI/CD, and Delta Lake capabilities. Hands-on with Terraform for infrastructure provisioning and GitLab/Jenkins for pipeline orchestration. Experience with Docker, container image lifecycle, and secure CI/CD container workflows.
and Lakehouse architectures to ensure our data solutions are secure, efficient, and optimized. Key Responsibilities: Design and implement data solutions using Azure services, including Azure Databricks, ADF, and Data Lake Storage. Develop and maintain ETL/ELT pipelines to process structured and unstructured data from multiple sources. Automate loads using Databricks Workflows and Jobs. Develop, test and build CI … modeling, warehousing, and real-time streaming. Knowledge of developing and processing full and incremental loads. Experience of automated loads using Databricks Workflows and Jobs. Expertise in Azure Databricks, including Delta Lake, Spark optimizations, and MLflow. Strong experience with Azure Data Factory (ADF) for data integration and orchestration. Hands-on experience with Azure DevOps, including pipelines, repos, and infrastructure as code.
Spark • Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL) • Experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, and Delta Live Tables • Solid understanding of data warehousing principles, ETL/ELT processes, data modeling techniques, and database systems • Proven experience with at least one major cloud platform (Azure … and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based data lakes and warehouses • Leveraging the Databricks ecosystem (SQL, Delta Lake, Workflows, Unity Catalog) to deliver reliable and performant data workflows • Integrating with cloud services such as Azure, AWS, or GCP to enable secure, cost-effective data solutions
London, South East, England, United Kingdom Hybrid / WFH Options
Robert Half
years of experience in a data engineering or similar technical role. Hands-on experience with key Microsoft Azure services: Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage, Azure SQL Database. Solid understanding of data modeling, ETL/ELT, and warehousing concepts. Proficiency in SQL and one or more programming languages (e.g., Python, Scala). Exposure to Microsoft Fabric … Familiarity with software testing methodologies and development team collaboration. Experience working with Power BI and DAX. Strong documentation, communication, and stakeholder engagement skills. Preferred Qualifications: Experience with Lakehouse architecture, Delta Lake, or Databricks. Exposure to Agile/Scrum working practices. Microsoft certifications (e.g., Azure Data Engineer Associate). Background in consulting or professional services. Understanding of data governance.
AWS, GCP), including AWS primitives such as IAM, S3, RDS, EMR, ECS, and more. Advanced experience working with, and understanding the tradeoffs of, at least one of the following Data Lake table/file formats: Delta Lake, Parquet, Iceberg, Hudi. Previous hands-on expertise with Spark. Experience working with containerisation technologies (Docker, Kubernetes). Streaming knowledge: experience with streaming technologies.
Manchester, North West, United Kingdom Hybrid / WFH Options
IO Associates
Engineer on cloud-based projects. Strong hands-on skills with Databricks, Apache Spark, and Python or Scala. Proficient in SQL and working with large-scale data environments. Experience with Delta Lake, Azure Data Lake, or similar technologies. Familiarity with version control, CI/CD, and infrastructure-as-code tools. Ability to deliver independently in a contractor capacity.
London, South East, England, United Kingdom Hybrid / WFH Options
WüNDER TALENT
solution design. Requirements: Proven experience as a Data Engineer in cloud-first environments. Strong commercial knowledge of AWS services (e.g. S3, Glue, Redshift). Advanced PySpark and Databricks experience (Delta Lake, Unity Catalog, Databricks Jobs, etc.). Proficient in SQL (T-SQL/SparkSQL) and Python for data transformation and scripting. Hands-on experience with workflow orchestration tools.
Wilmslow, England, United Kingdom Hybrid / WFH Options
The Citation Group
with IaC tools like Terraform or CloudFormation. Experience with workflow orchestration tools (e.g., Airflow, Dagster). Good understanding of cloud providers: AWS, Microsoft Azure, Google Cloud. Familiarity with DBT, Delta Lake, Databricks. Experience working in Agile environments with tools like Jira and Git. About Us: We are Citation. We are far from your average service provider.
drive business transformation. You'll also contribute to best practice implementation and continuous improvement within cross-functional engineering teams. What You'll Do: Design and develop robust pipelines using Delta Lake, Spark Structured Streaming, and Unity Catalog. Build real-time event-driven solutions with tools such as Kafka and Azure Event Hubs. Apply DevOps principles to develop CI/CD pipelines.
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Somerset Bridge
the live environment. What you'll be responsible for: Develop and maintain API services using Databricks and Azure. Implement and manage Azure Cache (Redis) and Azure Redis. Utilize Databricks Delta Live Tables for data processing and analytics. Integrate the platform with Snowflake for data storage and retrieval. Collaborate with cross-functional teams to deliver the platform in an agile … deployment. What you'll need: Experience in ML Ops engineering, with a focus on Azure and Databricks. Knowledge of Postgres, Azure Cache (Redis) and Azure Redis. Experience with Databricks Delta Live Tables and Snowflake. Experience in Data (Delta) Lake architecture. Experience with Docker and Azure Container Services. Familiarity with API service development and orchestration. Strong problem-solving skills.
months. A client of mine is looking for a Data Engineer to help maintain and enhance their existing cloud-based data platform. The core migration to a Databricks Delta Lakehouse on AWS has already been completed, so the focus will be on improving pipeline performance, supporting analytics, and contributing to ongoing platform development. Key Responsibilities: - Maintain and optimise existing … usable datasets - Contribute to good data governance, CI/CD workflows, and engineering standards - Continue developing your skills in PySpark, Databricks, and AWS-based tools Tech Stack Includes: - Databricks (Delta Lake, PySpark) - AWS - CI/CD tooling (Git, DevOps pipelines) - Cloud-based data warehousing and analytics tools If you're a mid-to-senior-level Data Engineer, feel free to apply.
technology, and security to ensure data flows securely and efficiently from external providers into our financial platforms. Key Responsibilities: Develop and maintain scalable data pipelines using Databricks, Spark, and Delta Lake to process large volumes of structured and semi-structured data. Design ETL/ELT workflows to extract data from third-party APIs and SFTP sources, standardise and … meet appropriate standards for security, resilience and operational support. Skills & Experience Required. Essential: Hands-on experience developing data pipelines in Databricks, with a strong understanding of Apache Spark and Delta Lake. Proficient in Python for data transformation and automation tasks. Solid understanding of AWS services, especially S3, Transfer Family, IAM, and VPC networking. Experience integrating data from external APIs.
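The standardisation step this listing describes (mapping third-party feed records onto an internal schema before loading) can be sketched without any Databricks dependency. The field names, date format, and CSV layout below are purely illustrative assumptions, not details from the role:

```python
import csv
import io
from datetime import datetime

def standardise_record(raw):
    """Map one third-party record onto a hypothetical internal schema:
    ISO-8601 dates, lower-cased identifiers, amounts held as integer pence
    to avoid floating-point rounding downstream."""
    return {
        "trade_id": raw["TradeRef"].strip().lower(),
        "booked_at": datetime.strptime(raw["Date"], "%d/%m/%Y").date().isoformat(),
        "amount_pence": round(float(raw["Amount"]) * 100),
    }

# Simulate a small CSV feed as it might arrive from an API or SFTP drop.
feed = io.StringIO("TradeRef,Date,Amount\nABC-1,31/01/2025,1250.50\n")
rows = [standardise_record(r) for r in csv.DictReader(feed)]
print(rows)
# [{'trade_id': 'abc-1', 'booked_at': '2025-01-31', 'amount_pence': 125050}]
```

In a Databricks pipeline the same per-record mapping would typically run inside a PySpark transformation before writing to a Delta table; the normalisation logic itself is unchanged.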
with the ability to manage stakeholder expectations, navigate complex requirements, and deliver tailored solutions across diverse industries. 7 years' experience working with Databricks. Good hands-on experience with Spark, Delta Lake, and Unity Catalog. Strong understanding of cloud platforms like Azure, AWS and/or Google Cloud. Experience designing data lakes, lakehouses, and modern data platforms.
London, South East, England, United Kingdom Hybrid / WFH Options
CODEVERSE LIMITED
best practices, architecture documents, and runbooks. Act as SME, collaborating across teams and representing the Databricks function in architecture reviews. Key Skills: Expert in Azure Databricks administration, Unity Catalog, Delta Lake, Spark. Strong knowledge of Azure Data Factory, ADLS Gen2, Key Vault, and Azure networking. Skilled in CI/CD automation (Azure DevOps, GitHub Actions) and scripting (Python, PowerShell).
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Somerset Bridge
actions. Collaborate with cross-functional teams to deliver the platform in an agile manner. Provide guidance on the implementation and management of Azure Cache (Redis), Postgres, Azure Redis, Databricks Delta Live Tables, and Snowflake. Ensure the platform supports microservices and API-driven architecture with sub-2-second calls. Develop and maintain documentation, architecture diagrams, and other technical artifacts. Manage … you'll need: Proven experience in ML Ops engineering, with a focus on Azure and Databricks. Strong knowledge of Postgres, Azure Cache (Redis) and Azure Redis. Experience with Databricks Delta Live Tables and Snowflake. Experience with Docker and Azure Container Services. Familiarity with API service development and orchestration. Experience in Data (Delta) Lake architecture (Azure). Excellent communication skills.
Columbia, South Carolina, United States Hybrid / WFH Options
Systemtec Inc
Columbia, SC. Candidates must have experience in big data technologies and cloud-based technologies: AWS Services, State Machines, CDK, Glue, TypeScript, CloudWatch, Lambda, CloudFormation, S3, Glacier Archival Storage, DataSync, Lake Formation, AppFlow, RDS PostgreSQL, Aurora, Athena, Amazon MSK, Apache Iceberg, Spark, Python. ONSITE: Partially onsite, 3 days per week (Tue, Wed, Thu) and as needed. Standard work hours … experience plus an associate's degree in Computer Science, Information Technology, or other job-related degree. 6 years of application development and systems testing. Nice to have: AWS Redshift, Databricks Delta Lake, Unity Catalog, data engineering and processing using Databricks, AI and Machine Learning, Amazon Bedrock, AWS SageMaker, Unified Studio, R Studio/Posit Workbench, R Shiny/Posit.
Ashburn, Virginia, United States Hybrid / WFH Options
Adaptive Solutions, LLC
prototype to production • Minimum of 3 years' experience building and deploying scalable, production-grade AI/ML pipelines in AWS and Databricks • Practical knowledge of tools such as MLflow, Delta Lake, and Apache Spark for pipeline development and model tracking • Experience architecting end-to-end ML solutions, including feature engineering, model training, deployment, and ongoing monitoring