grow our collective data engineering capability.
What we're looking for
- Solid experience as a Senior/Lead Data Engineer in complex enterprise environments.
- Strong coding skills in Python (Scala or functional languages a plus).
- Expertise with Databricks, Apache Spark, and Snowflake (HDFS/HBase also useful).
- Experience integrating large, messy datasets into reliable, scalable data products.
- Strong …
experience in Data Engineering, with a minimum of 3 years of hands-on Azure Databricks experience delivering production-grade solutions.
- Strong programming proficiency in Python and Spark (PySpark) or Scala, with the ability to build scalable and efficient data processing applications.
- Advanced understanding of data warehousing concepts, including dimensional modelling, ETL/ELT patterns, and modern data integration architectures.
- Extensive …
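To give a concrete flavour of the PySpark work these listings describe, here is a minimal batch-transformation sketch. It is not taken from any employer's codebase: the table and column names (raw.orders, warehouse.dim_customer, order_id, amount, etc.) are hypothetical, and access to Spark SQL tables in a Databricks-style environment is assumed.

```python
# Minimal sketch of a PySpark batch transformation feeding a simple dimensional model.
# All table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-elt-sketch").getOrCreate()

raw_orders = spark.read.table("raw.orders")                # messy landing data
dim_customer = spark.read.table("warehouse.dim_customer")  # conformed dimension

clean_orders = (
    raw_orders
    .dropDuplicates(["order_id"])                         # remove duplicate events
    .withColumn("order_ts", F.to_timestamp("order_ts"))   # normalise types
    .withColumn("order_date", F.to_date("order_ts"))      # derive partition column
    .filter(F.col("amount") > 0)                          # drop obviously bad rows
)

# Join the cleaned fact data to the customer dimension.
fact_orders = clean_orders.join(dim_customer, on="customer_id", how="left")

# Write out as a partitioned table for downstream consumption.
(fact_orders
 .write
 .mode("overwrite")
 .partitionBy("order_date")
 .saveAsTable("warehouse.fact_orders"))
```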
Desired: Experience with WMS/WFS services, graph theory (NetworkX), GDAL, and Snowflake.
Nice to have: CI/CD and orchestration tools (Airflow, Argo CD), Mapbox/MapLibre, Scala, Streamlit, DuckDB, and Power BI.
What you need to do now
If you're interested in this role, click 'apply now' to forward an up-to-date copy of your …
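For the graph-theory (NetworkX) item above, here is a minimal sketch of a weighted shortest-path computation. The node names and weights are made up purely for illustration.

```python
# Minimal NetworkX sketch: shortest path over a small weighted graph.
# Node names and edge weights are illustrative only.
import networkx as nx

g = nx.Graph()
g.add_weighted_edges_from([
    ("depot", "site_a", 4.0),
    ("depot", "site_b", 2.5),
    ("site_b", "site_a", 1.0),
    ("site_a", "site_c", 3.0),
])

# Dijkstra shortest path by edge weight.
path = nx.shortest_path(g, source="depot", target="site_c", weight="weight")
length = nx.shortest_path_length(g, source="depot", target="site_c", weight="weight")
print(path, length)  # ['depot', 'site_b', 'site_a', 'site_c'] 6.5
```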
pipelines using tools like Apache Airflow, DBT, AWS Glue, Azure Data Factory, or Kafka Connect.
Desirable: Optimize data transformations for performance, reusability, and modular design (e.g., using SQL/Scala/Python).
Disclosure and Barring Service Check
This post is subject to the Rehabilitation of Offenders Act (Exceptions Order) 1975 and as such it will be necessary for a …
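As an illustration of the orchestration tooling mentioned (Apache Airflow), here is a minimal DAG sketch with two dependent tasks. The DAG id, schedule and task bodies are hypothetical, and an Airflow 2.x installation is assumed.

```python
# Minimal Apache Airflow DAG sketch: extract then transform, run daily.
# DAG id, task names and the callables' contents are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull data from a source system into staging.
    pass


def transform():
    # Placeholder: apply SQL/Python transformations to staged data.
    pass


with DAG(
    dag_id="example_elt_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # transform runs only after extract succeeds
```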
ongoing platform modernisation. Ensure resilience, reliability and data quality across high-impact operational systems.
What We're Looking For
- Strong software engineering skills in Python or C# (Python/Scala useful for data-heavy components).
- Solid understanding of data modelling, relational databases (SQL Server, Oracle, PostgreSQL) and building performant queries/schemas.
- Experience with post-trade, middle-office, or …
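For the "performant queries/schemas" requirement, here is a minimal sketch using Python's built-in sqlite3 module to show a simple schema, an index on the filtered columns, and a parameterised query. The table and column names are hypothetical; production systems would target SQL Server, Oracle or PostgreSQL instead.

```python
# Minimal sketch: relational schema, an index, and a parameterised query.
# Table/column names are hypothetical; sqlite3 stands in for a production RDBMS.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE trades (
           trade_id   INTEGER PRIMARY KEY,
           account_id TEXT NOT NULL,
           symbol     TEXT NOT NULL,
           quantity   INTEGER NOT NULL,
           trade_date TEXT NOT NULL
       )"""
)
# An index on the columns the query filters on keeps lookups fast as volume grows.
conn.execute("CREATE INDEX idx_trades_account_date ON trades (account_id, trade_date)")

conn.execute(
    "INSERT INTO trades (account_id, symbol, quantity, trade_date) VALUES (?, ?, ?, ?)",
    ("ACC-1", "VOD.L", 100, "2024-01-02"),
)

rows = conn.execute(
    "SELECT symbol, quantity FROM trades WHERE account_id = ? AND trade_date >= ?",
    ("ACC-1", "2024-01-01"),
).fetchall()
print(rows)  # [('VOD.L', 100)]
```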
background in Azure data services, including:
- Azure Synapse
- Azure Data Factory (ADF)
- Databricks
- Proven ability to work with streaming data sources and real-time telemetry
- Ideally some experience with Scala
- Ability to work with large-scale, fast-moving datasets in production environments
- Consultancy background with experience delivering complex data solutions for enterprise clients
Principal-Level Responsibilities
As a Principal Data …
Milton Keynes, Buckinghamshire, UK Hybrid/Remote Options
ScaleOps Search Ltd
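For the streaming and real-time telemetry requirement in the listing above, here is a minimal PySpark Structured Streaming sketch. The input path, schema, checkpoint location and Delta output are assumptions about a Databricks-style environment, not details from the role itself.

```python
# Minimal sketch: windowed aggregation over streaming telemetry with PySpark.
# Paths, schema and the Delta sink are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("telemetry-stream-sketch").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read newline-delimited JSON telemetry as it lands.
telemetry = (
    spark.readStream
    .schema(schema)
    .json("/mnt/landing/telemetry/")
)

# Average reading per device over 5-minute windows, tolerating late data.
averages = (
    telemetry
    .withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"), "device_id")
    .agg(F.avg("reading").alias("avg_reading"))
)

# Continuously write the aggregates to a curated Delta table.
query = (
    averages.writeStream
    .outputMode("append")
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/telemetry_avg")
    .start("/mnt/curated/telemetry_avg")
)
```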
- Custom Control Systems (AMX/Crestron/Extron/SY)
- Audio and Video Conferencing (Cisco/Lifesize/Polycom/Zoom/Microsoft Teams)
- Microsoft Hub
- Digital Signage Systems (Scala/OneLAN/BrightSign)
- Video Wall Display Systems (Datapath/Dexon)
- IPTV (Exterity/Tripleplay)
- Live Event Work (VC/Presentation/Broadcast)
Configuring, performing diagnostics, and firmware updates of …
SL6, Maidenhead, Royal Borough of Windsor and Maidenhead, Berkshire, United Kingdom
Unified Support