programming proficiency in Python and Spark (PySpark) or Scala, with the ability to build scalable and efficient data processing applications. Advanced understanding of data warehousing concepts, including dimensional modelling, ETL/ELT patterns, and modern data integration architectures. Extensive experience working with Azure data services, particularly Azure Data Factory, Azure Blob Storage, Azure SQL Database, and related components within the …
City of London, London, United Kingdom Hybrid/Remote Options
Michael Page
innovative solutions and maintaining a strong reputation for excellence in analytics and data-driven decision-making. Description Senior Data Engineer Develop and maintain robust and scalable data pipelines and ETL processes. Optimise data workflows and ensure efficient data storage solutions. Collaborate with analytics and engineering teams to meet business objectives. Ensure data integrity and implement best practices for data governance. …
London, South East, England, United Kingdom Hybrid/Remote Options
Michael Page Technology
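The pipeline responsibilities in the listing above (extract, transform, and load stages with data-integrity checks) can be illustrated with a minimal, self-contained Python sketch. This is purely illustrative: the dataset, field names, and the in-memory "warehouse" are invented stand-ins for whatever source and target systems a real role would involve.

```python
import csv
import io

# Hypothetical raw feed; in practice this would come from a source system.
RAW = """order_id,amount,currency
1001,250.00,GBP
1002,,GBP
1003,99.50,EUR
"""

def extract(text):
    """Extract: parse raw CSV text into dict records."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: enforce basic data integrity and normalise types."""
    return [
        {"order_id": int(r["order_id"]),
         "amount": float(r["amount"]),
         "currency": r["currency"]}
        for r in rows
        if r["amount"]  # drop rows with a missing amount
    ]

def load(rows, store):
    """Load: upsert keyed records into a target store (a dict here)."""
    for r in rows:
        store[r["order_id"]] = r

warehouse = {}
load(transform(extract(RAW)), warehouse)
print(sorted(warehouse))  # order 1002 is dropped by the integrity check
```

Keeping each stage a separate function is the "modular ETL" idea several of these listings mention: each stage can be tested, monitored, and swapped independently.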
…/recovery, and maintenance planning. Ability to independently gather and analyse requirements and translate them into technical database solutions. Solid understanding of relational database concepts and normalisation. Experience with ETL processes, data integration, and reporting tools (e.g., SSIS, SSRS, Power BI). Excellent problem-solving skills and attention to detail. Strong communication and interpersonal skills; ability to work effectively with …
Burton-On-Trent, Staffordshire, West Midlands, United Kingdom Hybrid/Remote Options
Crimson
in Azure Data Pipeline development is key for this position. Key Skills & Responsibilities: Build and manage pipelines using Azure Data Factory, Databricks, CI/CD, and Terraform. Optimisation of ETL processes for performance and cost-efficiency. Design scalable data models aligned with business needs. Azure data solutions for efficient data storage and retrieval. Ensure compliance with data protection laws (e.g. …)
London, South East, England, United Kingdom Hybrid/Remote Options
recruitment22
data modellers, and reporting teams to ensure the curated data supports deeper insights into corporate performance. Optimise data pipelines for scalability, reliability, and maintainability using best practices (e.g., modular ETL design, version control, CI/CD). Strong understanding of Microsoft Fabric architecture and components. Expertise in Microsoft Fabric Data Engineering, Fabric Dataflows/Azure Data Factory. Experience with Azure Synapse …
London, South East, England, United Kingdom Hybrid/Remote Options
Asset Resourcing Limited
Extensive experience in designing cloud data platforms using Azure, AWS, or exceptional on-premise design expertise. At least 5 years in data engineering or business intelligence roles. Proficiency in ETL and data pipeline design, with a technology-agnostic approach. A solid understanding of data warehouse and data lake principles. Expert SQL skills and demonstrable data modelling capabilities. About the Company …
background in hands-on development of platforms/dashboards, including SQL, ADF, and Power BI. Management and leadership of multi-disciplinary teams within data – Data Science, Data Analysis, Engineering/ETL, Data Visualisation. Experience of developing and, critically, delivering AI/NLP/ML and predictive analytics capability within commercial environments. Experience of ensuring that all data-related activities comply with …
Bournemouth, Dorset, South West, United Kingdom Hybrid/Remote Options
Sanderson Recruitment
Solid knowledge of Delta Lake and Lakehouse principles. Hands-on experience with SQL for data transformation. Familiarity with Azure services (ADLS/Blob, Key Vault, SQL). Knowledge of ETL/ELT frameworks and monitoring. Desirable: Understanding of Microsoft Fabric. Experience with DevOps practices (CI/CD). Awareness of data governance and lineage tools. Experience in Financial …
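The "SQL for data transformation" skill mentioned above can be sketched with Python's built-in sqlite3 module, which makes the example self-contained. The table, column names, and trade figures are invented for illustration; a real role would run similar SQL against Databricks, Azure SQL, or a lakehouse engine.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical raw layer: one row per trade event.
conn.executescript("""
CREATE TABLE raw_trades (trade_id INTEGER, symbol TEXT, qty INTEGER, price REAL);
INSERT INTO raw_trades VALUES
  (1, 'ABC', 100, 2.50),
  (2, 'ABC', -40, 3.25),
  (3, 'XYZ', 10, 99.00);
""")

# A typical ELT-style transformation: aggregate raw events
# into a curated, analytics-ready layer.
conn.execute("""
CREATE VIEW positions AS
SELECT symbol,
       SUM(qty)         AS net_qty,
       SUM(qty * price) AS gross_value
FROM raw_trades
GROUP BY symbol
""")

rows = conn.execute("SELECT * FROM positions ORDER BY symbol").fetchall()
print(rows)  # → [('ABC', 60, 120.0), ('XYZ', 10, 990.0)]
```

Defining the curated layer as a view over the raw table mirrors the raw-to-curated layering (bronze/silver-style) that the Delta Lake and lakehouse listings here describe.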
and lakehouses using Databricks, with additional exposure to Snowflake, Azure Synapse, or Fabric being a bonus. Data Modelling – Build and optimise enterprise-grade data models across varied data layers. ETL/ELT Engineering – Use tooling such as Databricks, SSIS, ADF, Informatica, IBM DataStage to drive efficient data ingestion and transformation. Data Governance – Implement governance and MDM using tools like Unity …
Central London, London, United Kingdom Hybrid/Remote Options
McCabe & Barton
architecture, and analytics environments are reliable, performant, and secure. Key Responsibilities Platform Development & Maintenance Design and implement data pipelines using Azure Data Factory, Databricks, and related Azure services. Build ETL/ELT processes to transform raw data into structured, analytics-ready formats. Optimise pipeline performance and ensure high availability of data services. Infrastructure & Architecture Architect and deploy scalable data lake …
Familiarity with GitLab, unit testing, and CI/CD pipelines. Strong troubleshooting ability and experience working in Agile environments. Excellent communication skills with stakeholder-facing experience. Practical experience building ETL workflows, lakehouse architectures, dataflows, and semantic models. Exposure to time-series data, financial market feeds, transactional records, and risk-related datasets.
… Significant experience with Qlik Sense and Power BI for dashboards and visualisations. Proven ability to develop insights for diverse audiences, including senior leadership. Strong understanding of data warehousing, ETL processes, and cloud platforms (AWS & Azure). Excellent communication skills to explain technical concepts to non-technical stakeholders. Interested? Apply now to join a team that's unlocking the true …
this role, you will be responsible for: building and managing data pipelines using Azure Synapse, Data Factory, Databricks, or Microsoft Fabric; designing and maintaining data lakes, data warehouses, and ETL/ELT processes; developing scalable data models for reporting in Power BI; and working closely with stakeholders to understand the needs of their individual business and designing tailored solutions to meet …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
Wellington, Shropshire, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
Skills & Experience Strong expertise in SAS 9.4 (DI) and SAS Viya 3.x (SAS Studio, VA, VI). Familiarity with Platform LSF, Jira, and Git. Hands-on experience with ETL tools: Pentaho, Talend. Data virtualisation experience with Denodo. Proficiency in SQL and data modelling. Knowledge of Oracle (nice to have). Solid understanding of Agile/Scrum frameworks.