… of critical datasets. Key Requirements: Strong experience in Power BI, SQL, and Python is essential. Experience with Azure and Microsoft Fabric. Knowledge of BI engineering concepts, including data modelling, ETL, pipelines, and backend data management. Experience with legacy BI systems (SSIS, SSAS cubes) and troubleshooting processes. Strong problem-solving skills and attention to detail, with the ability to manage multiple …
Banbury, Oxfordshire, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
into Features, Epics, and Stories and setting engineering standards. Collaborate with the Data Architect to design and implement data architecture and build plans. Build and maintain scalable data pipelines, ETL/ELT processes, and large-scale data workflows. Optimise data systems for performance, reliability, and scalability. Implement data quality processes and maintain data models, schemas, and documentation. Operate CI/CD …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
design, build, and optimise scalable data pipelines and lakehouse architectures on Azure, enabling advanced analytics and data-driven decision making across the business. Key Responsibilities: Design, develop, and maintain ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake. Build and optimise data lakehouse architectures on Azure Data Lake Storage (ADLS). Develop high-performance data solutions using Azure …
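The ELT pattern these roles describe — land raw data first, then transform it inside the platform — can be sketched with stdlib sqlite3 standing in for Databricks and Delta Lake. All table and column names here are hypothetical:

```python
import sqlite3

# ELT sketch: extract raw records, load them untyped, transform inside the store.
# sqlite3 stands in for a Databricks/Delta Lake lakehouse; names are illustrative.
raw_orders = [
    ("2024-01-03", "widget", "12.50"),
    ("2024-01-03", "gadget", "7.00"),
    ("2024-01-04", "widget", "12.50"),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (order_date TEXT, product TEXT, amount TEXT)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", raw_orders)

# Transform step: cast types and aggregate the raw layer into a curated table.
conn.execute("""
    CREATE TABLE daily_sales AS
    SELECT order_date, SUM(CAST(amount AS REAL)) AS total
    FROM raw_orders
    GROUP BY order_date
""")

for row in conn.execute("SELECT * FROM daily_sales ORDER BY order_date"):
    print(row)
```

The same land-then-transform shape scales up: in Databricks the raw layer would be a Delta table on ADLS and the transform a PySpark job, but the staging-to-curated flow is identical.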
Basingstoke, Hampshire, South East, United Kingdom Hybrid/Remote Options
Spectrum It Recruitment Limited
external MSPs and vendors to ensure best-practice delivery. Technology & Experience: Essential: Strong hands-on experience with the Azure data platform. Proven experience delivering API and enterprise system integrations. ETL/ELT pipelines, data modelling, and data warehousing. Understanding of how data feeds into Power BI and analytics platforms. Desirable: GCP exposure (to support migration); iPaaS platforms such as Boomi …
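The API-integration work mentioned above typically starts by flattening a nested API response into rows an ETL load step can consume. A minimal sketch, with an entirely hypothetical payload and field names (a real pipeline would fetch this over HTTP first):

```python
import json

# Integration sketch: flatten a (hypothetical) API payload into load-ready rows.
payload = json.loads("""
{
  "customers": [
    {"id": 1, "name": "Acme", "contacts": [{"email": "a@acme.test"}]},
    {"id": 2, "name": "Globex", "contacts": []}
  ]
}
""")

rows = [
    {"customer_id": c["id"], "name": c["name"], "n_contacts": len(c["contacts"])}
    for c in payload["customers"]
]
print(rows)
```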
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid/Remote Options
Reed Technology
staff. Required Skills & Qualifications: Experience designing cloud data platforms in Azure/AWS or significant on-premise design experience. 5+ years in data engineering or business intelligence roles. Extensive ETL and data pipeline design experience, technology agnostic. Proficiency in SQL and experience with data engineering coding languages such as Python, R, or Spark. Understanding of data warehouse and data lake …
model drifts, data-quality alerts, scheduled re-training pipelines. Data Management and Preprocessing: Collect, clean and preprocess large datasets to facilitate analysis and model training. Implement data pipelines and ETL processes to ensure data availability and quality. Software Development: Write clean, efficient and scalable code in Python. Utilize CI/CD practices for version control, testing and code review. Work …
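The drift monitoring and data-quality alerting described above can be sketched in stdlib Python: compare a new batch's mean against a training-time baseline and flag deviations beyond k standard deviations. The feature, values, and threshold are hypothetical; production systems track many features and statistics, not one:

```python
import statistics

# Drift-check sketch: alert when a batch's mean moves more than k baseline
# standard deviations from the training mean. All numbers are illustrative.
def drift_alert(baseline, batch, k=3.0):
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.fmean(batch) - mu) > k * sigma

baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0]
print(drift_alert(baseline, [10.1, 9.9, 10.0]))   # stable batch
print(drift_alert(baseline, [14.0, 15.2, 13.8]))  # shifted batch
```

A scheduled re-training pipeline would run a check like this per batch and trigger an alert or retrain job when it returns True.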
London, South East, England, United Kingdom Hybrid/Remote Options
Proactive Appointments
London, South East, England, United Kingdom Hybrid/Remote Options
Robert Half
and external sources, ensuring data quality, governance, and integrity.
* Implement efficient data models and schemas within Snowflake, and use DBT for transformation, orchestration, and workflow management.
* Optimise ELT/ETL processes for improved performance, cost efficiency, and scalability.
* Troubleshoot and resolve data pipeline issues swiftly and effectively across the data platform.
* Work with orchestration tools such as Airflow, ADF, or …
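The core idea behind the orchestration tools named here (Airflow, ADF) is running pipeline steps in dependency order. A minimal stdlib sketch using `graphlib` — the task names are hypothetical, and real orchestrators add scheduling, retries, and monitoring on top:

```python
from graphlib import TopologicalSorter

# Orchestration sketch: declare task dependencies, then resolve a run order.
# Keys are tasks; values are the tasks they depend on. Names are illustrative.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # extract runs first, load last
```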
experience with Snowflake. 2+ years production experience with dbt (mandatory). Advanced SQL and strong Python programming skills. Experience with Git, CI/CD, and DevOps practices. Familiarity with ETL/ELT tools and cloud platforms (AWS, Azure). Knowledge of Snowflake features such as Snowpipe, streams, tasks, and query optimisation. Preferred Qualifications: Snowflake certifications (SnowPro Core or Advanced). …
Java exposure beneficial. Delta Lake/Delta table optimisation experience. Git/GitLab, CI/CD pipelines, DevOps practices. Strong troubleshooting and problem-solving ability. Experience with lakehouse architectures, ETL workflows, and distributed computing. Familiarity with time-series, market data, transactional data or risk metrics. Nice to Have: Power BI dataset preparation; OneLake, Azure Data Lake, Kubernetes, Docker; Knowledge of …
programming proficiency in Python and Spark (PySpark) or Scala, with the ability to build scalable and efficient data processing applications. Advanced understanding of data warehousing concepts, including dimensional modelling, ETL/ELT patterns, and modern data integration architectures. Extensive experience working with Azure data services, particularly Azure Data Factory, Azure Blob Storage, Azure SQL Database, and related components within the …
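The dimensional modelling these listings ask for usually means a star schema: a fact table of measures joined to descriptive dimension tables. A tiny sketch with stdlib sqlite3 — the tables, columns, and data are hypothetical:

```python
import sqlite3

# Star-schema sketch: one fact table, one dimension table, names illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (
        product_key INTEGER REFERENCES dim_product(product_key),
        qty INTEGER,
        amount REAL
    );
    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO fact_sales VALUES (1, 2, 25.0), (1, 1, 12.5), (2, 3, 21.0);
""")

# Typical analytical query: join the fact to its dimension and aggregate.
report = conn.execute("""
    SELECT p.name, SUM(f.amount) AS revenue
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.name ORDER BY revenue DESC
""").fetchall()
print(report)  # [('widget', 37.5), ('gadget', 21.0)]
```

The same shape carries over to Azure SQL Database or a Synapse/Fabric warehouse; only the scale and the loading tooling (e.g. Azure Data Factory) change.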