Job Title: Senior Data Engineer (Azure, SparkSQL, Team Lead)
Sector: Financial Services
Duration: Long-term contract
Location: Hybrid (3 days/week from Sheffield)
Visa: UK Citizen/ILR/Dependent visa (no visa sponsorship)
We are looking for a Senior Data Engineer with hands-on expertise in SQL/BigQuery migration, Azure Databricks, and SparkSQL, who also brings team leadership experience and thrives in Agile/SAFe/Scrum environments. Key Responsibilities: Lead and contribute to a small Agile team working on a cross-cloud data migration project. Migrate complex BigQuery SQL transformations to SparkSQL on Azure. Build and execute ETL workflows using Azure Databricks and Python. Drive automation of SQL workflows and artefact migration across cloud providers. Collaborate with developers, POs, and stakeholders on quality delivery and performance optimization. Key Requirements: Strong SQL skills (BigQuery SQL & SparkSQL), Python, and ETL pipeline development. Experience with Azure and cloud data tools. Familiarity with …
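The core migration task in this role, rewriting BigQuery SQL as SparkSQL, is often part-automated with rule-based rewrites before hand-tuning. A minimal sketch of that idea (the rewrite rules and function name are illustrative, not the client's actual tooling; production migrations typically use a real SQL parser rather than regexes):

```python
import re

# Two BigQuery-to-SparkSQL rewrites commonly needed in such migrations.
# Illustrative only: a real migration needs a proper SQL parser.
REWRITES = [
    # BigQuery's SAFE_DIVIDE returns NULL on division by zero; a portable
    # SparkSQL equivalent is an explicit NULLIF on the divisor.
    (re.compile(r"SAFE_DIVIDE\(([^,]+),\s*([^)]+)\)", re.IGNORECASE),
     r"(\1 / NULLIF(\2, 0))"),
    # BigQuery backtick-quotes project.dataset.table names; Spark catalogs
    # typically use plain catalog.schema.table identifiers.
    (re.compile(r"`([\w.-]+)`"), r"\1"),
]

def bigquery_to_sparksql(sql: str) -> str:
    """Apply each rewrite rule in turn to a single SQL string."""
    for pattern, replacement in REWRITES:
        sql = pattern.sub(replacement, sql)
    return sql
```

For example, `bigquery_to_sparksql("SELECT SAFE_DIVIDE(a, b) FROM \`proj.ds.t\`")` yields `"SELECT (a / NULLIF(b, 0)) FROM proj.ds.t"`.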
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Client Server
Graduate Data Engineer (Python SparkSQL) *Newcastle Onsite* to £33k
Do you have a first-class education combined with Data Engineering skills? You could be progressing your career at a start-up Investment Management firm that has secure backing, an established Hedge Fund client as a … by minimum AAA grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python, including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have experience … a range of events and early finish for drinks on Fridays. Apply now to find out more about this Graduate Data Engineer (Python SparkSQL) opportunity. At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn.
Newcastle upon Tyne, England, United Kingdom Hybrid / WFH Options
Somerset Bridge Group
Hands-on experience in building ELT pipelines and working with large-scale datasets using Azure Data Factory (ADF) and Databricks. Strong proficiency in SQL (T-SQL, SparkSQL) for data extraction, transformation, and optimisation. Proficiency in Azure Databricks (PySpark, Delta Lake, SparkSQL) for big data processing. Knowledge of data warehousing concepts and relational database design, particularly with Azure Synapse Analytics. Experience working with Delta Lake for schema evolution, ACID transactions, and time travel in Databricks. Strong Python (PySpark) skills for big data processing and automation. Experience with Scala (optional, but preferred for advanced Spark applications). Experience working with Databricks Workflows & Jobs for data orchestration. Strong knowledge of feature engineering and feature stores, particularly the Databricks Feature Store for ML training and inference. Experience with data modelling techniques to support analytics and reporting. Familiarity with …
Skipton, England, United Kingdom Hybrid / WFH Options
Skipton
Key Technology: Databricks, Data Factory, Storage, Key Vault. Experience with source control systems, such as Git. dbt (Data Build Tool) for transforming and modelling data. SQL (SparkSQL) & Python (PySpark). CI/CD tools. Certifications: Microsoft Certified: Azure Fundamentals (AZ-900); Microsoft Certified: Azure Data Fundamentals (DP-900). You will need to be you: curious about technology and adaptable to new technologies. Agile …
Exeter, England, United Kingdom Hybrid / WFH Options
MBN Solutions
governance techniques. Good understanding of Quality and Information Security principles. Experience with Azure and ETL tools such as ADF and Databricks. Advanced database and SQL skills, along with Python, PySpark and SparkSQL. Strong understanding of data model design and implementation principles. Data …
South East London, England, United Kingdom Hybrid / WFH Options
La Fosse
technical and non-technical teams. Troubleshoot issues and support wider team adoption of the platform. What You’ll Bring: Proficiency in Python, PySpark, SparkSQL or Java. Experience with cloud tools (Lambda, S3, EKS, IAM). Knowledge of Docker, Terraform, GitHub Actions. Understanding of data quality frameworks.
tools to manage the platform, ensuring resilience and optimal performance are maintained. Data Integration and Transformation: Integrate and transform data from multiple organisational SQL databases and SaaS applications using end-to-end dependency-based data pipelines, to establish an enterprise source of truth. Create ETL and ELT processes using Azure Databricks, ensuring audit-ready financial data pipelines and secure data exchange with Databricks Delta Sharing and SQL Warehouse endpoints. Governance and Compliance: Ensure compliance with information security standards in our highly regulated financial landscape by implementing Databricks Unity Catalog for governance, data quality monitoring, and ADLS … architecture. Proven experience of ETL/ELT, including Lakehouse, Pipeline Design, Batch/Stream processing. Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, Spark SQL. Good working knowledge of data warehouse and data mart architectures. Good experience in Data Governance, including Unity Catalog …
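The "dependency-based data pipelines" described above reduce to resolving a task graph before execution, which orchestrators like ADF or Databricks Workflows do internally. A minimal sketch using Python's standard-library graphlib (the table names are hypothetical, not this employer's schema):

```python
from graphlib import TopologicalSorter

# Hypothetical ELT steps and their upstream dependencies.
dependencies = {
    "stg_transactions": set(),                      # raw extract, no deps
    "stg_accounts": set(),
    "dim_account": {"stg_accounts"},                # depends on staging
    "fct_transactions": {"stg_transactions", "dim_account"},
}

def run_order(deps):
    """Return one valid execution order; raises CycleError on circular deps."""
    return list(TopologicalSorter(deps).static_order())
```

An orchestrator would run each step in this order (or in parallel where no dependency links them), guaranteeing staging tables are loaded before the dimensions and facts that read them.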
What will you bring to the table? 3+ years' experience as a DWH Developer or in a related role. Thorough understanding of Kimball and Snowflake models. SQL is your first language :-) You know dbt, SparkSQL, BigQuery and Git. A natural collaborator with excellent communication skills. Able …
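In practice, the Kimball modelling asked for above usually involves slowly changing dimensions. A minimal pure-Python sketch of a Type 2 upsert, where a changed attribute expires the current row and inserts a new version (column names are illustrative, not this employer's schema):

```python
from datetime import date

def scd2_upsert(dim_rows, natural_key, new_attrs, today):
    """Kimball SCD Type 2 sketch. dim_rows is a list of dicts with
    'key', 'attrs', 'valid_from', 'valid_to' (None = current version).
    Mutates and returns dim_rows."""
    current = next((r for r in dim_rows
                    if r["key"] == natural_key and r["valid_to"] is None), None)
    if current is not None and current["attrs"] == new_attrs:
        return dim_rows                      # no change: nothing to do
    if current is not None:
        current["valid_to"] = today          # expire the old version
    dim_rows.append({"key": natural_key, "attrs": new_attrs,
                     "valid_from": today, "valid_to": None})
    return dim_rows
```

In dbt or SparkSQL the same logic would be expressed as a snapshot or MERGE statement, but the versioning rule is identical.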
that’s transforming how data powers retail, this is your opportunity. Your Role (Key Responsibilities): Design, build, and optimise robust data pipelines using PySpark, SparkSQL, and Databricks to ingest, transform, and enrich data from a variety of sources. Translate business requirements into scalable and performant data engineering solutions, working closely … principles are built into everything you do. About You (Experience & Qualifications): Experience building and maintaining data solutions. Experience with languages such as Python, SQL and, preferably, PySpark. Hands-on experience with Microsoft Azure data services, including Azure Data Factory, and a good understanding of cloud-based data architectures.
that power advanced analytics and business intelligence. What You'll Do: Architect and build scalable data pipelines using Microsoft Fabric, PySpark, and T-SQL. Lead the development of Star Schema Lakehouse tables to support BI and self-service analytics. Collaborate with stakeholders to translate business needs into data … technical leader within the team. Ensure data integrity, compliance, and performance across the platform. What You'll Bring: Expertise in Microsoft Fabric, Azure, PySpark, SparkSQL, and modern data engineering practices. Strong experience with Lakehouse architectures, data orchestration, and real-time analytics. A pragmatic, MVP-driven mindset with a passion for …
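Building Star Schema tables, as this role requires, means splitting flat source records into dimension tables with surrogate keys and fact tables that reference them. A minimal sketch of that split (the function and column names are hypothetical; in Fabric this would be done in PySpark or T-SQL over Lakehouse tables):

```python
def build_star(records, dim_cols):
    """Split flat rows into a dimension table (with surrogate keys) and
    a fact table referencing it via 'dim_sk'."""
    dim, fact = {}, []
    for rec in records:
        dim_key_vals = tuple(rec[c] for c in dim_cols)
        sk = dim.setdefault(dim_key_vals, len(dim) + 1)  # assign surrogate key
        fact_row = {k: v for k, v in rec.items() if k not in dim_cols}
        fact_row["dim_sk"] = sk                          # foreign key to dim
        fact.append(fact_row)
    dim_table = [dict(zip(dim_cols, vals), sk=sk) for vals, sk in dim.items()]
    return dim_table, fact
```

Repeated attribute combinations collapse into a single dimension row, so the fact table stays narrow and joins back to descriptive attributes through the surrogate key.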
engineering team. BASIC QUALIFICATIONS: Experience in the data/BI space. Experience with data visualization using Tableau, QuickSight, or similar tools. Experience with SQL. Experience working directly with business stakeholders to translate between data and business needs. Experience using cloud storage and computing technologies such as AWS Redshift … and programming languages such as R, Python, Ruby, etc. Master's degree in statistics, data science, or an equivalent quantitative field. Experience with using SparkSQL for ETL workloads. Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to our success. We make recruiting …
GCP Data Engineer (Java, Spark, ETL), Slough
Client: Staffworx
Location: Slough, United Kingdom
Job Category: Other
EU work permit required: Yes
Posted: 31.05.2025 Expiry Date: 15.07.2025
Job Description: Proficiency in programming languages such as Python, PySpark, and Java. SparkSQL. GCP BigQuery. Version control tools (Git, GitHub), automated deployment tools. Google Cloud Platform services, Pub/Sub, BigQuery Streaming, and related technologies. Deep understanding of …