Job Title: Senior Data Engineer (Azure, SparkSQL, Team Lead)
Duration: Long-term contract
Location: Hybrid (3 days/week from Sheffield)
Visa: UK Citizen/ILR/Dependent visa (no visa sponsorship)
We are looking for a Senior Data Engineer with hands-on expertise in SQL/BigQuery migration, Azure Databricks, and SparkSQL, who also brings team leadership experience and thrives in Agile/SAFe/Scrum environments.
Key Responsibilities:
• Lead and contribute to a small Agile team working on a cross-cloud data migration project.
• Migrate complex BigQuery SQL transformations to SparkSQL on Azure.
• Build and execute ETL workflows using Azure Databricks and Python.
• Drive automation of SQL workflows and artefact migration across cloud providers.
• Collaborate with developers, POs, and stakeholders on quality delivery and performance optimization.
Key Requirements: …
Financial Services
Location: Hybrid (3 days/week from Sheffield)
We are looking for a Senior Data Engineer with hands-on expertise in SQL/BigQuery migration, Azure Databricks, and SparkSQL, who also brings team leadership experience and thrives in Agile/SAFe/Scrum environments.
Key Responsibilities:
• Lead and contribute to a small Agile team working on a cross-cloud data migration project.
• Migrate complex BigQuery SQL transformations to SparkSQL on Azure.
• Build and execute ETL workflows using Azure Databricks and Python.
• Drive automation of SQL workflows and artefact migration across cloud providers.
• Collaborate with developers, POs, and stakeholders on quality delivery and performance optimization.
Key Requirements:
• Strong SQL skills (BigQuery SQL & SparkSQL), Python, and ETL pipeline development.
• Experience with Azure and cloud data tools.
• Familiarity …
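For context on what the BigQuery-to-SparkSQL migration above typically involves, here is a minimal sketch; the table and column names are invented for illustration and are not taken from the role.

```python
# Illustrative only: rewriting a BigQuery SQL transformation in SparkSQL.
# Table/column names ("orders", "order_ts", "amount") are hypothetical,
# and an "orders" table is assumed to be registered in the catalog.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bq-to-sparksql-sketch").getOrCreate()

# BigQuery original (for comparison):
#   SELECT DATE_TRUNC(order_ts, MONTH) AS month,
#          SUM(amount) AS revenue
#   FROM `project.dataset.orders`
#   GROUP BY month;
#
# SparkSQL equivalent: DATE_TRUNC takes the unit as a quoted string in
# the first position, and the backticked BigQuery table path becomes a
# catalog table reference.
monthly_revenue = spark.sql("""
    SELECT DATE_TRUNC('MONTH', order_ts) AS month,
           SUM(amount)                   AS revenue
    FROM orders
    GROUP BY DATE_TRUNC('MONTH', order_ts)
""")
monthly_revenue.show()
```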
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Client Server
Graduate Data Engineer (Python SparkSQL) *Newcastle Onsite* to £33k
Do you have a first class education combined with Data Engineering skills? You could be progressing your career at a start-up Investment Management firm that have secure backing, an established Hedge Fund client as a …
… by minimum A A A grades at A-level
You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python, including PySpark and Pandas
You have a good understanding of modern data engineering best practices
Ideally you will also have experience …
… a range of events and early finish for drinks on Fridays
Apply now to find out more about this Graduate Data Engineer (Python SparkSQL) opportunity.
At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn.
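As a rough illustration of the SQL/Spark/Pandas stack this role names, a small hypothetical PySpark-to-Pandas handoff follows; the file path and column names are invented.

```python
# Sketch of moving between Spark and Pandas, as the role's stack suggests.
# The parquet path and column names are placeholders for illustration.
import pandas as pd
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("graduate-de-sketch").getOrCreate()

# Distributed aggregation in Spark over a (hypothetical) trades dataset...
trades = spark.read.parquet("/data/trades.parquet")
daily = trades.groupBy("trade_date").agg(F.sum("notional").alias("total_notional"))

# ...then the small aggregated result is pulled into Pandas for local analysis.
daily_pd: pd.DataFrame = daily.toPandas()
print(daily_pd.describe())
```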
Newcastle upon Tyne, England, United Kingdom Hybrid / WFH Options
Somerset Bridge Group
Hands-on experience in building ELT pipelines and working with large-scale datasets using Azure Data Factory (ADF) and Databricks.
Strong proficiency in SQL (T-SQL, SparkSQL) for data extraction, transformation, and optimisation.
Proficiency in Azure Databricks (PySpark, Delta Lake, SparkSQL) for big data processing.
Knowledge of data warehousing concepts and relational database design, particularly with Azure Synapse Analytics.
Experience working with Delta Lake for schema evolution, ACID transactions, and time travel in Databricks.
Strong Python (PySpark) skills for big data processing and automation.
Experience with Scala (optional but preferred for advanced Spark applications).
Experience working with Databricks Workflows & Jobs for data orchestration.
Strong knowledge of feature engineering and feature stores, particularly the Databricks Feature Store for ML training and inference.
Experience with data modelling techniques to support analytics and reporting.
Familiarity with …
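To make the Delta Lake items above concrete, here is a minimal sketch of an ACID append with schema evolution and a time-travel read; the table path is hypothetical, and a Delta-enabled Spark session (as on Databricks) is assumed.

```python
# Sketch of the Delta Lake features the ad lists: ACID writes, schema
# evolution, and time travel. Assumes Spark with Delta already enabled
# (the default on Databricks); the storage path is a placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

path = "/mnt/lake/quotes"  # hypothetical lake location

# ACID append; mergeSchema allows later writes that add new columns
# to evolve the table schema instead of failing.
df = spark.createDataFrame([(1, "home", 250.0)], ["quote_id", "product", "premium"])
df.write.format("delta").mode("append").option("mergeSchema", "true").save(path)

# Time travel: read the table as it was at an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
v0.show()
```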
Salary: €50,000 - €60,000 per year
Requirements:
• 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark
• Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, SparkSQL)
• Experience with core components of the Databricks … ELT processes, data modeling and techniques, and database systems
• Proven experience with at least one major cloud platform (Azure, AWS, or GCP)
• Excellent SQL skills for data querying, transformation, and analysis
• Excellent communication and collaboration skills in English and German (min. B2 levels)
• Ability to work independently as …
… work hands-on with the Databricks platform, supporting clients in solving complex data challenges.
• Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python
• Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based …
… independently
• Leading a team of Data Engineers and delivering solutions as a team
Key skills/knowledge/experience:
• Proficient in PySpark, Python, SQL, with at least 5 years of experience
• Working experience with the Palantir Foundry platform is a must
• Experience designing and implementing data analytics solutions on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred).
• Proven track record of understanding and transforming customer requirements into a best-fit design and architecture.
• Demonstrated experience in end-to-end data management, data modelling, and data transformation for analytical use cases.
• Proficient in SQL (SparkSQL preferred).
• Experience with JavaScript/HTML/CSS a plus.
• Experience working in a Cloud environment such as Azure or AWS is a plus.
• Experience with Scrum/Agile development methodologies.
• At least 7 years of experience working with large scale …
Skipton, England, United Kingdom Hybrid / WFH Options
Skipton
Databricks, Data Factory, Storage, Key Vault
Experience with source control systems, such as Git
dbt (Data Build Tool) for transforming and modelling data
SQL (SparkSQL) & Python (PySpark)
Certifications:
Microsoft Certified: Azure Fundamentals (AZ-900)
Microsoft Certified: Azure Data Fundamentals (DP-900)
You will …
… CI/CD tools
Key Technology:
Experience with source control systems, such as Git
dbt (Data Build Tool) for transforming and modelling data
SQL (SparkSQL) & Python (PySpark)
Certifications: …
You will need to be you. Curious about technology and adaptable to new technologies. Agile …
… architectures that business engineering teams buy into and build their applications around.
Required Qualifications, Capabilities, and Skills:
• Experience across the data lifecycle with Spark-based frameworks for end-to-end ETL, ELT & reporting solutions using key components like SparkSQL & Spark Streaming.
• … end-to-end engineering experience supported by excellent tooling and automation.
Preferred Qualifications, Capabilities, and Skills:
• Good understanding of the Big Data stack (Spark/Iceberg).
• Ability to learn new technologies and patterns on the job and apply them effectively.
• Good understanding of established patterns, such as …
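As an illustration of the streaming component named above, here is a minimal Structured Streaming sketch (the current successor to the older Spark Streaming DStream API); the source path and schema are invented.

```python
# Minimal Structured Streaming sketch of a streaming ETL/reporting step.
# The landing directory and event schema are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("streaming-etl-sketch").getOrCreate()

# Read a stream of JSON events as they arrive in a landing directory.
events = (spark.readStream
          .schema("event_id STRING, ts TIMESTAMP, amount DOUBLE")
          .json("/landing/events"))

# A simple 5-minute windowed aggregation, emitted incrementally.
totals = events.groupBy(F.window("ts", "5 minutes")).agg(F.sum("amount").alias("total"))

# Console sink for illustration; a real pipeline would write to a table.
query = (totals.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```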
… across teams.
Key Technologies (awareness of):
Azure Databricks, Data Factory, Storage, Key Vault
Source control systems, such as Git
dbt (Data Build Tool), SQL (SparkSQL), Python (PySpark)
Certifications (Ideal):
SAFe POPM or Scrum PSP
Microsoft Certified: Azure Fundamentals (AZ-900)
Microsoft Certified: Azure …
… ingestion from various data sources, performing complex transformations, and publishing data to Azure Data Lake or other storage services.
• Write efficient and standardized SparkSQL and PySpark code for data transformations, ensuring data integrity and accuracy across the pipeline.
• Automate pipeline orchestration using Databricks Workflows or …
• … ingestion processes to handle structured, semi-structured, and unstructured data from various sources (APIs, databases, file systems).
• Implement data transformation logic using Spark, ensuring data is cleaned, transformed, and enriched according to business requirements.
• Leverage Databricks features such as Delta Lake to manage and track changes to …
• … as encryption at rest and in transit, and role-based access control (RBAC) within Azure Databricks and Azure services.
Performance Tuning & Optimization:
• Optimize Spark jobs for performance by tuning configurations, partitioning data, and caching intermediate results to minimize processing time and resource consumption.
• Continuously monitor and improve pipeline …
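A brief sketch of the partitioning and caching levers described in the tuning bullet above; the paths, column names, and partition count are illustrative assumptions, not values from the posting.

```python
# Sketch of two common Spark tuning levers the ad describes:
# repartitioning by a key and caching a reused intermediate result.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

raw = spark.read.parquet("/lake/raw/transactions")  # hypothetical path

# Repartition by a join/aggregation key to spread work evenly downstream.
by_customer = raw.repartition(200, "customer_id")

# Cache an intermediate result that several downstream jobs reuse,
# so it is computed once instead of once per action.
enriched = by_customer.dropDuplicates(["txn_id"]).cache()
enriched.count()  # materialises the cache

# Write partitioned output so later reads can prune by date.
enriched.write.partitionBy("txn_date").mode("overwrite").parquet("/lake/curated/transactions")
```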
… experience, or equivalent demonstrated through one or a combination …
Required:
• 7+ years of software engineering experience
• 3+ years of experience working in Spark, Hadoop and Big Data
• 3+ years of experience working on SparkSQL, Streaming and the DataFrame/Dataset API
• 3+ years of experience working on Spark query tuning and performance optimization
• Deep understanding of Hadoop/Cloud platforms, HDFS, ETL/ELT processes and Unix shell scripting
• 3+ years of experience working with Relational Database Management Systems (RDBMS) such as SQL Server, Oracle or MySQL
• 3+ years of experience with SQL & NoSQL database integration with Spark (MS SQL Server and MongoDB)
• 2+ years of Agile experience
What You Will Be Doing:
Consult on or participate in moderately complex initiatives and deliverables within Software Engineering and contribute to large …
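To illustrate the RDBMS-with-Spark integration listed above, here is a minimal JDBC read sketch against SQL Server; the host, credentials, and partition bounds are placeholders, and the SQL Server JDBC driver is assumed to be on the Spark classpath.

```python
# Sketch of reading an RDBMS table into Spark over JDBC.
# Connection details and the partitioning bounds are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc-sketch").getOrCreate()

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:sqlserver://dbhost:1433;databaseName=sales")
          .option("dbtable", "dbo.orders")
          .option("user", "etl_user")
          .option("password", "********")
          # Parallelise the read by splitting on a numeric column.
          .option("partitionColumn", "order_id")
          .option("lowerBound", "1")
          .option("upperBound", "1000000")
          .option("numPartitions", "8")
          .load())

# Expose the table to SparkSQL for downstream queries.
orders.createOrReplaceTempView("orders")
spark.sql("SELECT COUNT(*) FROM orders").show()
```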
Job Title: Data Engineer (Databricks, Python, SQL)
Location: Remote
Contract Type: Outside IR35
Start Date: ASAP
About Us: Toucanberry Tech is a boutique software consultancy delivering high-impact solutions to the financial sector. We work in agile, cross-functional teams to solve complex challenges with speed, autonomy, and … production-ready code. Strong communication skills and the ability to explain technical decisions clearly are key.
Key Responsibilities:
• Develop and maintain dbt SQL models and macros
• Collaborate with stakeholders to understand and refine modeling requirements
• Debug and optimize existing models
• Design ETL/ELT workflows and data …
• Work within Azure Databricks and follow code-based deployment practices
Must-Have Skills:
• 3 years of experience with Databricks (Lakehouse, Delta Lake, PySpark, SparkSQL)
• Strong SQL skills (5 years)
• Experience with Azure, focusing on Databricks
• Excellent client-facing communication skills
• Experience deploying Databricks …
Exeter, England, United Kingdom Hybrid / WFH Options
MBN Solutions
… governance techniques
Good understanding of Quality and Information Security principles
Experience with Azure, ETL tools such as ADF and Databricks
Advanced database and SQL skills, along with SQL, Python, PySpark, SparkSQL
Strong understanding of data model design and implementation principles
Data …
London (City of London), South East England, United Kingdom
Mastek
… performance, efficiency, and cost-effectiveness.
• Implement data quality checks and validation rules within data pipelines.
Data Transformation & Processing:
• Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies.
• Develop and maintain data processing logic for cleaning, enriching, and aggregating data.
• Ensure data consistency and accuracy throughout the data lifecycle.
Azure Databricks Implementation:
• Work extensively with Azure Databricks Unity Catalog, including Delta Lake, SparkSQL, and other relevant services.
• Implement best practices for Databricks development and deployment.
• Optimise Databricks workloads for performance and cost.
• Ability to program in languages such as SQL, Python, R, YAML and JavaScript
Data Integration:
• Integrate data from various sources, including relational databases, APIs, and streaming data sources.
• Implement data integration patterns and best practices.
• Work with API developers to ensure seamless data exchange.
Data Quality & Governance:
• Hands-on experience using Azure Purview for …
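A small sketch of the clean/enrich/aggregate pattern described under Data Transformation & Processing above; the source paths, column names, and reference table are invented for illustration.

```python
# Sketch of a Spark "clean, enrich, aggregate" transformation step.
# All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("transform-sketch").getOrCreate()

raw = spark.read.json("/lake/raw/customers")           # hypothetical source
countries = spark.read.parquet("/lake/ref/countries")  # hypothetical reference data

cleaned = (raw
           .dropDuplicates(["customer_id"])                 # de-duplicate
           .withColumn("email", F.lower(F.trim("email")))   # normalise values
           .filter(F.col("customer_id").isNotNull()))       # basic validity rule

# Enrich with reference data, then aggregate for reporting.
enriched = cleaned.join(countries, "country_code", "left")
enriched.groupBy("country_name").count().show()
```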
… platform, supporting clients in solving complex data challenges.
Your Job's Key Responsibilities Are:
• Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python
• Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based data lakes and warehouses
• Leveraging the Databricks ecosystem (SQL, Delta Lake, Workflows, Unity Catalog) to deliver reliable and performant data workflows
• Integrating with cloud services such as Azure, AWS, or GCP to enable secure, cost-effective data solutions
• Contributing to data modeling and architecture decisions to ensure consistency …
… continuously improve our tools and approaches
Profile
Essential Skills:
• 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark
• Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, SparkSQL)
• Experience with core components …
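To ground the Databricks ecosystem bullet above, here is a minimal sketch of a SparkSQL MERGE (upsert) into a Delta table; the customers table and its schema are hypothetical, and a Delta-enabled session (as on Databricks) is assumed.

```python
# Sketch of a Delta Lake upsert via SparkSQL MERGE, one of the
# Databricks ecosystem features named above. Assumes an existing
# "customers" Delta table; the schema is invented for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Stage a batch of updated rows (hypothetical data).
updates = spark.createDataFrame(
    [(1, "alice@example.com"), (2, "bob@example.com")],
    ["customer_id", "email"])
updates.createOrReplaceTempView("updates")

# Upsert: update matching rows, insert new ones, in a single ACID commit.
spark.sql("""
    MERGE INTO customers AS t
    USING updates AS s
    ON t.customer_id = s.customer_id
    WHEN MATCHED THEN UPDATE SET t.email = s.email
    WHEN NOT MATCHED THEN INSERT (customer_id, email) VALUES (s.customer_id, s.email)
""")
```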