Financial Services. Location: Hybrid (3 days/week from Sheffield). We are looking for a Senior Data Engineer with hands-on expertise in SQL/BigQuery migration, Azure Databricks, and SparkSQL, who also brings team leadership experience and thrives in Agile/SAFe … Scrum environments.

Key Responsibilities:
- Lead and contribute to a small Agile team working on a cross-cloud data migration project.
- Migrate complex BigQuery SQL transformations to SparkSQL on Azure.
- Build & execute ETL workflows using Azure Databricks and Python.
- Drive automation of SQL workflows and artefact migration across cloud providers.
- Collaborate with developers, POs, and stakeholders on quality delivery and performance optimization.

Key Requirements:
- Strong SQL skills (BigQuery SQL & SparkSQL), Python, and ETL pipeline development.
- Experience with Azure and cloud data tools.
- Familiarity …
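For flavour, here is a minimal, hypothetical sketch of the kind of rewrite such a migration involves, assuming an invented sales table: BigQuery's SAFE_DIVIDE and TIMESTAMP_TRUNC have no direct SparkSQL equivalents, so the usual translation is a NULLIF guard plus date_trunc.

```python
# Hypothetical sketch of a BigQuery -> SparkSQL rewrite; the "sales"
# rows and column names are invented for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bq-migration-sketch").getOrCreate()

spark.createDataFrame(
    [(100.0, 4, "2024-03-15"), (50.0, 0, "2024-03-16")],
    ["revenue", "units", "order_ts"],
).createOrReplaceTempView("sales")

# BigQuery original, for reference:
#   SELECT SAFE_DIVIDE(revenue, units)      AS unit_price,
#          TIMESTAMP_TRUNC(order_ts, MONTH) AS order_month
#   FROM   sales;
#
# SparkSQL lacks SAFE_DIVIDE and TIMESTAMP_TRUNC; the usual rewrites
# are a NULLIF guard against division by zero and date_trunc().
spark.sql("""
    SELECT revenue / NULLIF(units, 0) AS unit_price,
           date_trunc('MONTH', CAST(order_ts AS TIMESTAMP)) AS order_month
    FROM   sales
""").show()
```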
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Client Server
Graduate Data Engineer (Python SparkSQL) *Newcastle Onsite* to £33k. Do you have a first class education combined with Data Engineering skills? You could be progressing your career at a start-up Investment Management firm that have secure backing, an established Hedge Fund client as a … by minimum AAA grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have experience … a range of events and early finish for drinks on Fridays. Apply now to find out more about this Graduate Data Engineer (Python SparkSQL) opportunity. At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn.
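A minimal sketch of the PySpark/Pandas interop this role mentions; the CSV file and column names are invented:

```python
# Illustrative sketch of PySpark/Pandas interop; the file and columns
# are invented for illustration.
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pandas-interop-sketch").getOrCreate()

# Load a small reference file with Pandas, then lift it into Spark
# so it can be joined against large distributed datasets.
pdf = pd.read_csv("instruments.csv")  # hypothetical file
instruments = spark.createDataFrame(pdf)

# Aggregate at scale in Spark, then pull the small result back to
# Pandas for plotting or local analysis.
summary = instruments.groupBy("asset_class").count().toPandas()
print(summary)
```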
Newcastle upon Tyne, England, United Kingdom Hybrid / WFH Options
Somerset Bridge
- Hands-on experience in building ELT pipelines and working with large-scale datasets using Azure Data Factory (ADF) and Databricks.
- Strong proficiency in SQL (T-SQL, SparkSQL) for data extraction, transformation, and optimisation.
- Proficiency in Azure Databricks (PySpark, Delta Lake, SparkSQL) for big data processing.
- Knowledge of data warehousing concepts and relational database design, particularly with Azure Synapse Analytics.
- Experience working with Delta Lake for schema evolution, ACID transactions, and time travel in Databricks.
- Strong Python (PySpark) skills for big data processing and automation.
- Experience with … Scala (optional but preferred for advanced Spark applications).
- Experience working with Databricks Workflows & Jobs for data orchestration.
- Strong knowledge of feature engineering and feature stores, particularly in Databricks Feature Store for ML training and inference.
- Experience with data modelling techniques to support analytics and reporting.
- Familiarity with …
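As an illustration of the Delta Lake features listed above (ACID appends, schema evolution, time travel), a minimal sketch assuming a Databricks or Delta-enabled Spark session; the table path is invented:

```python
# Minimal Delta Lake sketch; assumes a Databricks (or Delta-enabled
# Spark) session, and the table path is hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import lit

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

path = "/mnt/lake/events_demo"  # hypothetical location

# ACID append: each Delta write is a transaction, so readers never
# observe a half-written table version.
df = spark.range(100).withColumnRenamed("id", "event_id")
df.write.format("delta").mode("append").save(path)

# Schema evolution: mergeSchema lets a later append add a new column.
(df.withColumn("source", lit("web"))
   .write.format("delta")
   .option("mergeSchema", "true")
   .mode("append")
   .save(path))

# Time travel: read the table as it was at an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
v0.show()
```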
Salary: €50,000–60,000 per year.

Requirements:
• 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark
• Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, SparkSQL)
• Experience with core components of the Databricks … ELT processes, data modeling and techniques, and database systems
• Proven experience with at least one major cloud platform (Azure, AWS, or GCP)
• Excellent SQL skills for data querying, transformation, and analysis
• Excellent communication and collaboration skills in English and German (min. B2 level)
• Ability to work independently as … work hands-on with the Databricks platform, supporting clients in solving complex data challenges.
• Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python
• Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based …
… platform, supporting clients in solving complex data challenges. Your job's key responsibilities are:
- Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python
- Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based data lakes and warehouses
- Leveraging the Databricks ecosystem (SQL, Delta Lake, Workflows, Unity Catalog) to deliver reliable and performant data workflows
- Integrating with cloud services such as Azure, AWS, or GCP to enable secure, cost-effective data solutions
- Contributing to data modeling and architecture decisions to ensure consistency … continuously improve our tools and approaches

Profile — Essential Skills:
- 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark
- Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, SparkSQL)
- Experience with core components …
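A hedged sketch of the ingest-transform-load pattern these responsibilities describe; the landing path, lake path, columns, and filter are invented:

```python
# Hedged ETL sketch; paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: raw JSON landed from an upstream API or streaming source.
raw = spark.read.json("/mnt/landing/orders/")  # hypothetical path

# Transform: normalise types, derive a date column, drop bad rows.
clean = (raw
         .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
         .withColumn("order_date", F.to_date("created_at"))
         .filter(F.col("order_id").isNotNull()))

# Load: persist as a Delta table for downstream analytics.
(clean.write.format("delta")
      .mode("overwrite")
      .save("/mnt/lake/orders_clean"))  # hypothetical path
```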
… data quality projects. You have extended experience with Business Glossary, Data Catalog, Data Lineage or Reporting Governance. You know SQL inside out. You have experience in Power BI, including DAX and data modelling techniques, at minimum Star Schemas (Kimball); others are nice to have. You get even … You have good experience with master data management. You are familiar with data quality tools like Azure Purview (Collibra, Informatica, Soda); Python, Spark, PySpark, SparkSQL; and other security protocols like CLS (Column Level Security) and Object Level Security. You are fluent in Dutch and …
London, England, United Kingdom Hybrid / WFH Options
ZipRecruiter
… Analytics, Azure Storage, and Azure ML. Proficiency in managing and optimizing cloud resources for performance and cost-efficiency. Extensive experience with Databricks, including SparkSQL, Delta Lake, and Databricks clusters. Experience in deploying, monitoring, and optimizing machine learning models in a production environment. Proficiency in programming …
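For the model-deployment requirement, a minimal sketch using MLflow, the usual route to production models on Databricks; the registered model name is invented:

```python
# Minimal MLflow sketch; registration assumes a tracking server with
# a model registry (available by default on Databricks), and the
# model name "churn-classifier" is hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

with mlflow.start_run():
    model = LogisticRegression().fit(X, y)
    # Log the fitted model and register it so a serving endpoint or
    # batch scoring job can load a versioned artifact.
    mlflow.sklearn.log_model(
        model, "model", registered_model_name="churn-classifier"
    )
```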
Your qualifications and experience:
- You are a pro at using SQL for data manipulation (at least one of PostgreSQL, MSSQL, Google BigQuery, SparkSQL)
- Modelling & Statistical Analysis experience, ideally customer related
- Coding skills in at least one of Python, R, Scala, C, Java or JS
- Track record of using … in one or more programming languages.
- Keen interest in some of the following areas: Big Data Analytics (e.g. Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks, Graph Computing …
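A small sketch of the customer-related SQL manipulation this ad alludes to, using a window function over an invented orders table; the same query runs, with minor dialect tweaks, on PostgreSQL, MSSQL, BigQuery, and SparkSQL:

```python
# Window-function sketch over invented data; table and column names
# are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-sketch").getOrCreate()

orders = spark.createDataFrame(
    [(1, "2024-01-05", 120.0),
     (1, "2024-02-10", 80.0),
     (2, "2024-01-20", 200.0)],
    ["customer_id", "order_date", "amount"],
)
orders.createOrReplaceTempView("orders")

# Rank each customer's orders by recency.
spark.sql("""
    SELECT customer_id,
           order_date,
           amount,
           ROW_NUMBER() OVER (PARTITION BY customer_id
                              ORDER BY order_date DESC) AS order_rank
    FROM orders
""").show()
```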
London, England, United Kingdom Hybrid / WFH Options
Highnic
GCP Data Engineer (Java, Spark, ETL) at Good Chemical Science & Technology Co. Ltd.

Responsibilities:
- Develop, implement, and optimize real-time data processing workflows using Google Cloud Platform services such as Dataflow, Pub/Sub, and BigQuery Streaming.
- Design and develop ETL processes … Cloud Platform Cloud Composer.
- Possibly obtain and leverage Google Cloud Platform certifications.

Qualifications:
- Proficiency in programming languages such as Python and Java.
- Experience with SparkSQL, GCP BigQuery, and real-time data processing.
- Understanding of event-driven architectures.
- Familiarity with Unix/Linux platforms.
- Deep understanding of real-time data processing …
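A hedged sketch of the streaming pattern this role names, a Dataflow pipeline (Apache Beam Python SDK) reading Pub/Sub and streaming into BigQuery; the project, topic, table, and schema are invented:

```python
# Hedged streaming sketch: Pub/Sub in, BigQuery streaming insert out.
# Project, topic, table and schema are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (p
     | "ReadPubSub" >> beam.io.ReadFromPubSub(
           topic="projects/my-project/topics/events")
     # Pub/Sub delivers raw bytes; decode and parse each message.
     | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
     | "WriteBigQuery" >> beam.io.WriteToBigQuery(
           "my-project:analytics.events",
           schema="event_id:STRING,ts:TIMESTAMP",
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```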