Financial Services. Location: Hybrid (3 days/week from Sheffield). We are looking for a Senior Data Engineer with hands-on expertise in SQL/BigQuery migration, Azure Databricks, and Spark SQL, who also brings team leadership experience and thrives in Agile/SAFe … Scrum environments. Key Responsibilities: Lead and contribute to a small Agile team working on a cross-cloud data migration project. Migrate complex BigQuery SQL transformations to Spark SQL on Azure. Build & execute ETL workflows using Azure Databricks and Python. Drive automation of SQL workflows and artefact migration across cloud providers. Collaborate with developers, POs, and stakeholders on quality delivery and performance optimization. Key Requirements: Strong SQL skills (BigQuery SQL & Spark SQL), Python, and ETL pipeline development. Experience with Azure and cloud data tools. Familiarity …
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Client Server
Graduate Data Engineer (Python, Spark SQL) *Newcastle Onsite* to £33k. Do you have a first class education combined with Data Engineering skills? You could be progressing your career at a start-up Investment Management firm that has secure backing, an established Hedge Fund client as a … by minimum AAA grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python, including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have experience … a range of events and early finish for drinks on Fridays. Apply now to find out more about this Graduate Data Engineer (Python, Spark SQL) opportunity. At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn. …
Newcastle upon Tyne, England, United Kingdom Hybrid / WFH Options
Somerset Bridge
Hands-on experience in building ELT pipelines and working with large-scale datasets using Azure Data Factory (ADF) and Databricks. Strong proficiency in SQL (T-SQL, Spark SQL) for data extraction, transformation, and optimisation. Proficiency in Azure Databricks (PySpark, Delta Lake, Spark SQL) for big data processing. Knowledge of data warehousing concepts and relational database design, particularly with Azure Synapse Analytics. Experience working with Delta Lake for schema evolution, ACID transactions, and time travel in Databricks. Strong Python (PySpark) skills for big data processing and automation. Experience with … Scala (optional but preferred for advanced Spark applications). Experience working with Databricks Workflows & Jobs for data orchestration. Strong knowledge of feature engineering and feature stores, particularly the Databricks Feature Store for ML training and inference. Experience with data modelling techniques to support analytics and reporting. Familiarity with …
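Several of these listings cite the same three Delta Lake capabilities: schema evolution, ACID transactions, and time travel. A minimal PySpark sketch of what those look like in practice follows; it assumes a Databricks or delta-spark environment, and the table path, `policy_id` key, and source locations are illustrative, not details from any posting.

```python
# Minimal Delta Lake sketch: schema evolution, an ACID upsert (MERGE),
# and time travel. Paths and column names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
path = "/mnt/lake/policies"  # hypothetical Delta table location

# Schema evolution: mergeSchema lets new source columns land without manual DDL.
(spark.read.json("/mnt/raw/policies/2024-06-01")
    .write.format("delta")
    .option("mergeSchema", "true")
    .mode("append")
    .save(path))

# ACID upsert keyed on a hypothetical policy_id column; MERGE commits
# atomically, so readers never see a half-applied batch.
updates = spark.read.json("/mnt/raw/policies/2024-06-02")
(DeltaTable.forPath(spark, path).alias("t")
    .merge(updates.alias("u"), "t.policy_id = u.policy_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())

# Time travel: read the table as it was at an earlier commit version.
previous = spark.read.format("delta").option("versionAsOf", 0).load(path)
```

Where a wall-clock point is more natural than a commit number, `versionAsOf` can be swapped for `timestampAsOf`.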
Salary: 50.000 - 60.000 € per year. Requirements: • 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark • Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL) • Experience with core components of the Databricks … ELT processes, data modeling techniques, and database systems • Proven experience with at least one major cloud platform (Azure, AWS, or GCP) • Excellent SQL skills for data querying, transformation, and analysis • Excellent communication and collaboration skills in English and German (min. B2 level) • Ability to work independently as … work hands-on with the Databricks platform, supporting clients in solving complex data challenges. • Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python • Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based …
independently • Leading a team of Data Engineers and delivering solutions as a team. Key skills/knowledge/experience: • Proficient in PySpark, Python, and SQL, with at least 5 years of experience • Working experience with the Palantir Foundry platform is a must • Experience designing and implementing data analytics solutions on enterprise data … platforms and distributed computing (Spark/Hive/Hadoop preferred). • Proven track record of understanding and transforming customer requirements into a best-fit design and architecture. • Demonstrated experience in end-to-end data management, data modelling, and data transformation for analytical use cases. • Proficient in SQL (Spark SQL preferred). • Experience with JavaScript/HTML/CSS a plus. • Experience working in a Cloud environment such as Azure or AWS is a plus. • Experience with Scrum/Agile development methodologies. • At least 7 years of experience working with large scale …
Skipton, England, United Kingdom Hybrid / WFH Options
Skipton
Databricks, Data Factory, Storage, Key Vault. Experience with source control systems, such as Git. dbt (Data Build Tool) for transforming and modelling data. SQL (Spark SQL) & Python (PySpark). Certifications: Microsoft Certified: Azure Fundamentals (AZ-900); Microsoft Certified: Azure Data Fundamentals (DP-900). You will …
CI/CD tools. Key Technology: experience with source control systems, such as Git; dbt (Data Build Tool) for transforming and modelling data; SQL (Spark SQL) & Python (PySpark). Certifications: … You will need to be you: curious about technology and adaptable to new technologies; Agile …
Understanding of Agile methodologies, CI/CD tools, and the full software development lifecycle. Proficiency with Azure Databricks, Data Factory, Storage, Key Vault, Git, SQL (Spark SQL), and Python (PySpark). Certifications: Azure Fundamentals (AZ-900), Azure Data Fundamentals (DP-900). Curiosity about technology …
London, England, United Kingdom Hybrid / WFH Options
Skipton Building Society
experience. Experience with CI/CD tools. Key Technologies: Azure Databricks, Data Factory, Storage, Key Vault; source control experience (e.g., Git); proficiency in SQL (Spark SQL) and Python (PySpark). Certifications: Microsoft Certified: Azure Fundamentals (AZ-900), Azure Data Fundamentals (DP-900). Curiosity about technology …
across teams. Key Technologies (awareness of): Azure Databricks, Data Factory, Storage, Key Vault; source control systems, such as Git; dbt (Data Build Tool), SQL (Spark SQL), Python (PySpark). Certifications (ideal): SAFe POPM or Scrum PSP; Microsoft Certified: Azure Fundamentals (AZ-900); Microsoft Certified: Azure …
ingestion from various data sources, performing complex transformations, and publishing data to Azure Data Lake or other storage services. Write efficient and standardized Spark SQL and PySpark code for data transformations, ensuring data integrity and accuracy across the pipeline. Automate pipeline orchestration using Databricks Workflows or … ingestion processes to handle structured, semi-structured, and unstructured data from various sources (APIs, databases, file systems). Implement data transformation logic using Spark, ensuring data is cleaned, transformed, and enriched according to business requirements. Leverage Databricks features such as Delta Lake to manage and track changes to … as encryption at rest and in transit, and role-based access control (RBAC) within Azure Databricks and Azure services. Performance Tuning & Optimization: Optimize Spark jobs for performance by tuning configurations, partitioning data, and caching intermediate results to minimize processing time and resource consumption. Continuously monitor and improve pipeline …
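The pipeline work this listing describes (ingest raw data, clean and enrich it, publish to the lake, with caching and partitioning for performance) can be summarised in a short PySpark sketch. All paths and column names (`event_date`, `amount`, `customer_id`) are illustrative assumptions, not details from the posting.

```python
# Sketch of a standardized PySpark transformation: ingest raw files, clean
# and enrich, cache a reused intermediate, and publish partitioned tables.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.option("header", "true").csv("/mnt/raw/transactions/")

# Clean and enrich: drop malformed rows, normalise types, derive a partition key.
clean = (raw
         .where(F.col("amount").isNotNull())
         .withColumn("amount", F.col("amount").cast("double"))
         .withColumn("event_date", F.to_date("event_timestamp")))

# Cache the intermediate because two downstream outputs read it.
clean.cache()

daily = clean.groupBy("event_date").agg(F.sum("amount").alias("daily_total"))
by_customer = clean.groupBy("customer_id").agg(F.count("*").alias("txn_count"))

# Publish partitioned Delta outputs; partitioning by event_date keeps
# incremental rewrites and date-bounded reads cheap.
daily.write.format("delta").mode("overwrite").partitionBy("event_date").save("/mnt/curated/daily_totals")
by_customer.write.format("delta").mode("overwrite").save("/mnt/curated/customer_counts")
```

Caching only pays off here because the intermediate feeds two outputs; with a single consumer the extra memory pressure is usually not worth it.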
experience, or equivalent demonstrated through one or a combination. 7+ years of software engineering experience - Default Required. 3+ years of experience working in Spark, Hadoop and Big Data. 3+ years of experience working on Spark SQL, Streaming and the DataFrame/Dataset API. 3+ years of experience working on Spark query tuning and performance optimization. Deep understanding of Hadoop/Cloud platforms, HDFS, ETL/ELT processes and Unix shell scripting. 3+ years of experience working with Relational Database Management Systems (RDBMS) such as SQL Server, Oracle or MySQL. 3+ years of experience with SQL & NoSQL database integration with Spark (MS SQL Server and MongoDB). 2+ years of Agile experience. What You Will Be Doing: Consult on or participate in moderately complex initiatives and deliverables within Software Engineering and contribute to large …
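As a concrete illustration of the Spark query tuning this listing asks for, here is a minimal sketch of three common moves: right-sizing shuffle partitions, broadcasting a small dimension table, and checking the physical plan. The table and column names are hypothetical.

```python
# Common Spark query-tuning moves: shuffle-partition sizing, a broadcast
# join, and plan inspection. Table/column names are illustrative.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

# Fewer shuffle partitions than the 200 default often helps small/medium jobs.
spark.conf.set("spark.sql.shuffle.partitions", "64")

facts = spark.table("transactions")   # large fact table (assumed)
dims = spark.table("branch_lookup")   # small dimension table (assumed)

# Broadcasting the small side replaces a full shuffle join with a map-side join.
joined = facts.join(broadcast(dims), "branch_id")

agg = joined.groupBy("branch_region").agg(F.sum("amount").alias("total"))

# explain() shows whether the broadcast hint took effect in the physical plan.
agg.explain()
```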
Exeter, England, United Kingdom Hybrid / WFH Options
MBN Solutions
governance techniques. Good understanding of Quality and Information Security principles. Experience with Azure and ETL tools such as ADF and Databricks. Advanced database and SQL skills, along with SQL, Python, PySpark and Spark SQL. Strong understanding of data model design and implementation principles. Data …
performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines (a minimal sketch of such checks follows this listing). Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy … throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Need to program using languages such as … SQL, Python, R, YAML and JavaScript. Data Integration: Integrate data from various sources, including relational databases, APIs, and streaming data sources. Implement data integration patterns and best practices. Work with API developers to ensure seamless data exchange. Data Quality & Governance: Hands-on experience using Azure Purview for …
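As referenced above, a minimal PySpark sketch of rule-based validation with a quarantine path is shown below; the staging table, rule set, and quarantine target are illustrative assumptions, and a real pipeline might use a dedicated DQ framework instead.

```python
# Rule-based data quality checks with a quarantine path, in plain PySpark.
# Table and column names (staging.claims, claim_id, amount, status) are
# hypothetical.
from functools import reduce
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("staging.claims")

# Each rule is a named boolean expression, written null-safe so a NULL
# never slips past a comparison.
rules = {
    "claim_id_not_null": F.col("claim_id").isNotNull(),
    "amount_positive": F.col("amount").isNotNull() & (F.col("amount") > 0),
    "status_in_domain": F.col("status").isin("OPEN", "CLOSED", "PENDING"),
}

passed_all = reduce(lambda a, b: a & b, rules.values())

# Rows failing (or NULL on) any rule are quarantined; the rest proceed.
valid = df.where(passed_all)
rejected = df.where(~F.coalesce(passed_all, F.lit(False)))

# Per-rule failure counts make breaches visible in pipeline logs.
for name, expr in rules.items():
    print(name, df.where(~F.coalesce(expr, F.lit(False))).count())

rejected.write.format("delta").mode("append").saveAsTable("quarantine.claims")
valid.write.format("delta").mode("overwrite").saveAsTable("curated.claims")
```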
Requirements: Minimum of 8 years of experience in data engineering. At least 5 years of hands-on experience with Azure data services (Apache Spark, Azure Data Factory, Synapse Analytics, RDBMS such … as SQL Server). Proven leadership and management experience in data engineering teams. Proficiency in PySpark, Python (with Pandas), T-SQL, Spark SQL, and experience with CI/CD pipelines. Strong understanding of data modeling, ETL processes, and data warehousing concepts. Knowledge of version control systems like Git … deadlines. Azure certifications such as Microsoft Certified: Azure Data Engineer Associate or Azure Solutions Architect. Nice to Have: Experience with Scala for Apache Spark. Knowledge of other cloud platforms like AWS or GCP. Our Benefits Include: Group pension plan, life assurance, income protection, and critical illness cover. …
platform, supporting clients in solving complex data challenges. Your Job's Key Responsibilities Are: Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python. Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based … data lakes and warehouses. Leveraging the Databricks ecosystem (SQL, Delta Lake, Workflows, Unity Catalog) to deliver reliable and performant data workflows. Integrating with cloud services such as Azure, AWS, or GCP to enable secure, cost-effective data solutions. Contributing to data modeling and architecture decisions to ensure consistency … continuously improve our tools and approaches. Profile. Essential Skills: 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark. Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL). Experience with core components …
The role: This role sits within the Group Enterprise Systems (GES) Technology team. The ideal candidate is an experienced Microsoft data warehouse developer (SQL Server, SSIS, SSAS) capable of working independently and within a team to deliver enterprise-class data warehouse solutions and analytics platforms. The role involves … pipelines supporting BI and analytics use cases, ingesting, transforming, and loading data from multiple sources, structured and unstructured. Utilise enterprise-grade technology, primarily SQL Server 2019 and potentially Azure technologies, and explore other solutions where appropriate. Develop patterns, best practices, and standardized data pipelines to ensure consistency across the organisation. Essential Core Technical Experience: 5 to 10+ years of experience in SQL Server data warehouse or data provisioning architectures. Advanced SQL query writing and stored procedure experience. Experience developing ETL solutions in SQL Server, including SSIS & T-SQL. Experience with Microsoft BI …
data quality projects. You have extensive experience with Business Glossary, Data Catalog, Data Lineage or Reporting Governance. You know all about SQL. You have experience in Power BI, including DAX and data modelling techniques, minimum Star Schemas (Kimball); others are nice to have. You get even … You have good experience with master data management. You are familiar with data quality tools like Azure Purview (Collibra, Informatica, Soda). Python, Spark, PySpark, Spark SQL. Other security protocols like CLS (Column Level Security) and Object Level Security. You are fluent in Dutch and …
level. You must be SC cleared to be considered for the role. Tasks and Responsibilities: Engineering: Ingestion configuration. Write Python/PySpark and Spark SQL code for validation/curation in notebooks (a minimal sketch follows below). Create data integration test cases. Implement or amend worker pipelines. Implement data validation/… Web App. Good knowledge of real-time streaming applications, preferably with experience in Kafka real-time messaging or Azure Functions, Azure Service Bus. Spark processing and performance tuning. File formats and partitioning (e.g., Parquet, JSON, XML, CSV). Azure DevOps/GitHub. Hands-on experience in at least one …
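As referenced above, here is a minimal notebook-style sketch of Spark SQL validation and curation; the landing path, table, and columns (`case_id`, `case_type`, `received_at`) are illustrative assumptions, not details from the posting.

```python
# Notebook-style validation/curation with Spark SQL: expose the landing
# data as a view, filter and normalise it in SQL, then publish to Delta.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.read.parquet("/mnt/landing/cases/").createOrReplaceTempView("raw_cases")

curated = spark.sql("""
    SELECT
        case_id,
        TRIM(UPPER(case_type))  AS case_type,
        TO_DATE(received_at)    AS received_date
    FROM raw_cases
    WHERE case_id IS NOT NULL              -- validation: reject keyless rows
      AND TO_DATE(received_at) IS NOT NULL -- validation: reject unparseable dates
""")

curated.write.format("delta").mode("overwrite").save("/mnt/curated/cases")
```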
via automated ML Ops. Ideally, you'll also be technically skilled in most or all of the below: - Expert knowledge of Python and SQL, inc. the following libraries: NumPy, Pandas, PySpark and Spark SQL - Expert knowledge of ML Ops frameworks in the following categories …