role: This role sits within the Group Enterprise Systems (GES) Technology team. The ideal candidate will be an experienced Microsoft data warehouse developer (SQL Server, SSIS, SSAS) who can work both independently and as part of a team to deliver enterprise-class data warehouse solutions and analytics … data models. Build and maintain automated pipelines to support data solutions across BI and analytics use cases. Work with enterprise-grade technology, primarily SQL Server 2019 and potentially Azure technologies. Build patterns, common ways of working, and standardised data pipelines for DLG to ensure consistency across the organisation. … Essential Core Technical Experience: 5 to 10+ years' experience in SQL Server data warehouse or data provisioning architectures. Advanced SQL query writing and stored-procedure experience. Experience developing ETL solutions in SQL Server, including SSIS and T-SQL. Experience in Microsoft BI technologies.
Central London, London, United Kingdom Hybrid / WFH Options
167 Solutions Ltd
engineers to develop scalable solutions that enhance data accessibility and efficiency across the organisation. Key Responsibilities: Design, build, and maintain data pipelines using SQL, Python, and Spark. Develop and manage data warehouse and lakehouse solutions for analytics, reporting, and machine learning. Implement ETL/ELT … 6+ years of experience in data engineering within large-scale digital environments. Strong programming skills in Python, SQL, and Spark (Spark SQL). Expertise in Snowflake and modern data architectures. Experience designing and managing data pipelines, ETL, and ELT workflows. Knowledge of AWS services such as …
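The extract-transform-load responsibilities this listing describes can be sketched in miniature. This is an illustrative sketch only: Python's standard-library sqlite3 stands in for a real warehouse such as Snowflake, and the table and column names (`raw_orders`, `customer_revenue`) are hypothetical.

```python
import sqlite3

# Hypothetical ETL sketch: extract raw order rows, transform (aggregate
# revenue per customer), and load the result into a reporting table.
# sqlite3 is used here only as a stand-in for a real warehouse engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?)",
    [("acme", 120.0), ("acme", 80.0), ("globex", 50.0)],
)

# Transform and load in one ELT-style step: the aggregation runs in SQL
# inside the engine rather than in application code.
conn.execute(
    """CREATE TABLE customer_revenue AS
       SELECT customer, SUM(amount) AS total_revenue
       FROM raw_orders
       GROUP BY customer"""
)

for row in conn.execute(
    "SELECT customer, total_revenue FROM customer_revenue ORDER BY customer"
):
    print(row)  # → ('acme', 200.0) then ('globex', 50.0)
```

The same shape (stage raw data, aggregate in SQL, materialise a reporting table) carries over to Spark SQL or Snowflake, where the transform would run distributed rather than in-process.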
London, South East England, United Kingdom Hybrid / WFH Options
La Fosse
technical and non-technical teams. Troubleshoot issues and support wider team adoption of the platform. What You'll Bring: Proficiency in Python, PySpark, Spark SQL or Java. Experience with cloud tools (Lambda, S3, EKS, IAM). Knowledge of Docker, Terraform, GitHub Actions. Understanding of data quality frameworks.
London, South East England, United Kingdom Hybrid / WFH Options
DATAHEAD
availability and accessibility. Experience & Skills: Strong experience in data engineering. At least some commercial hands-on experience with Azure data services (e.g., Apache Spark, Azure Data Factory, Synapse Analytics). Proven experience in leading and managing a team of data engineers. Proficiency in programming languages such as PySpark … Python (with Pandas if no PySpark), T-SQL, and Spark SQL. Strong understanding of data modelling, ETL processes, and data warehousing concepts. Knowledge of CI/CD pipelines and version control (e.g., Git). Excellent problem-solving and analytical skills. Strong communication and collaboration abilities. Ability to manage multiple …
Your qualifications and experience: You are a pro at using SQL for data manipulation (at least one of PostgreSQL, MSSQL, Google BigQuery, Spark SQL). Modelling & statistical analysis experience, ideally customer related. Coding skills in at least one of Python, R, Scala, C, Java or JS. Track record of using … in one or more programming languages. Keen interest in some of the following areas: Big Data Analytics (e.g. Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks, Graph Computing …
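The customer-related SQL data-manipulation skill named above is largely dialect-portable. A minimal sketch, using Python's standard-library sqlite3 purely as a convenient local engine (the query itself ports to PostgreSQL, MSSQL, BigQuery, or Spark SQL with little change); the `events` table and its values are hypothetical:

```python
import sqlite3

# Hypothetical customer-analysis query: conditional aggregation to derive
# a per-customer event count and a conversion flag from raw event rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (customer TEXT, event TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("a", "visit"), ("a", "purchase"), ("b", "visit"), ("b", "visit")],
)

query = """
SELECT customer,
       COUNT(*) AS n_events,
       MAX(CASE WHEN event = 'purchase' THEN 1 ELSE 0 END) AS converted
FROM events
GROUP BY customer
ORDER BY customer
"""
for row in conn.execute(query):
    print(row)  # → ('a', 2, 1) then ('b', 2, 0)
```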