… or centralisation. Identify and cultivate relationships with key data creators, data owners and data consumers. Ensure data assets are properly defined and maintained within a central data catalogue. Data modelling to transform operational data into analytic/reporting structures such as Kimball-style multi-dimensional models (sketched below). Take ownership of data issues through to resolution, working with IT and …/dashboards that can be easily understood and used. Locate and define new data-related process improvement opportunities.
Skills and Experience:
Essential:
• Experience managing/leading a team.
• Data modelling, cleansing and enrichment, with experience in conceptual, logical, and physical data modelling.
• Familiarity with data warehouses and analytical data structures.
• Experience of data quality assurance, validation, and lineage.
• Knowledge … Git or other source control software.
• Knowledge of orchestration tools and processes (e.g. SSIS, Data Factory, Alteryx).
• Power BI development, including the data model, DAX, and visualisations.
• Relational and dimensional (Kimball) data modelling.
• Proficiency in SQL (T-SQL, PL/SQL, Databricks SQL).
Desirable:
• Databricks (or an alternative modern data platform such as Snowflake).
• Experience working in a regulated …
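For context, a minimal sketch of the Kimball-style dimensional model this role refers to. All table and column names are hypothetical, not taken from the listing:

```sql
-- Hypothetical star schema: one additive fact table joined to dimensions.
CREATE TABLE dim_customer (
    customer_key  INT PRIMARY KEY,    -- surrogate key
    customer_id   VARCHAR(20),        -- natural/business key
    customer_name VARCHAR(100),
    region        VARCHAR(50)
);

CREATE TABLE dim_date (
    date_key      INT PRIMARY KEY,    -- e.g. 20240131
    calendar_date DATE,
    month_name    VARCHAR(10),
    year_number   INT
);

CREATE TABLE fact_sales (
    customer_key  INT REFERENCES dim_customer (customer_key),
    date_key      INT REFERENCES dim_date (date_key),
    quantity      INT,
    net_amount    DECIMAL(12, 2)      -- additive measure
);

-- Typical analytic query: slice an additive measure by dimension attributes.
SELECT d.year_number,
       c.region,
       SUM(f.net_amount) AS total_sales
FROM fact_sales f
JOIN dim_date     d ON f.date_key = d.date_key
JOIN dim_customer c ON f.customer_key = c.customer_key
GROUP BY d.year_number, c.region;
```

This is the shape a Power BI data model and its DAX measures would typically sit on top of.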
Azure Synapse) and architecting cloud-native data platforms. Programming Proficiency: Expert-level skills in Python (PySpark) and SQL for data engineering and transformation. Scala is a strong plus. Data Modelling: Strong understanding and practical experience with data warehousing, data lake, and dimensional modelling concepts. ETL/ELT & Data Pipelines: Proven track record of designing, building, and optimizing …
City of London, London, United Kingdom Hybrid / WFH Options
Osmii
and performance tuning. Solid experience with Python, ideally using PySpark in Azure Databricks. Hands-on experience with Azure Data Lake Storage Gen2. Understanding of data warehouse concepts, dimensional modelling, and data architecture. Experience working with Delta Lake and large-scale data processing. Experience building ETL pipelines in Azure Data Factory or similar orchestration tools. Familiarity …
They closely collaborate with data scientists, analysts, and software engineers to ensure efficient data processing, storage, and retrieval for business insights and decision-making. Their expertise in data modelling, ETL (Extract, Transform, Load) processes, and big data technologies makes it possible to develop robust and reliable data solutions.
RESPONSIBILITIES
Data Pipeline Development: Design, implement, and maintain scalable data … in software engineering a plus.
DESIRABLE LANGUAGES/TOOLS
Proficiency in programming languages such as Python, Java, Scala, or SQL for data manipulation and scripting. Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling. Experience in big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink. Experience in using modern data architectures …
Nottingham, England, United Kingdom Hybrid / WFH Options
Cloud2 Consult
experience working with Fabric-based solutions.
Azure Ecosystem: At least 2 years of experience delivering business solutions using Azure Data Factory/Synapse, Power BI, and Azure SQL Server.
Data Modelling: Strong knowledge of dimensional modelling (Kimball); familiarity with other techniques such as Data Vault 2.0.
Development Experience: Minimum 3 years in database and/or analytics …
City of London, London, United Kingdom Hybrid / WFH Options
83data
including scheduling, monitoring, and alerting. Collaborate with cross-functional teams (Product, Engineering, Data Science, Compliance) to define data requirements and build reliable data flows. Champion best practices in data modelling, governance, and DevOps for data engineering (CI/CD, IaC). Serve as a key communicator between technical teams and business stakeholders, translating complex data needs into actionable plans. … Snowflake, BigQuery, Redshift). Hands-on experience with Apache Airflow (or similar orchestration tools). Strong proficiency in Python and SQL for pipeline development. Deep understanding of data architecture, dimensional modelling, and metadata management. Experience with cloud platforms (AWS, GCP, or Azure). Familiarity with version control, CI/CD, and Infrastructure-as-Code (Terraform or similar). …
· In-depth knowledge of Snowflake architecture, features, and best practices.
· Experience with CI/CD pipelines using Git and GitHub Actions.
· Knowledge of various data modelling techniques, including Star Schema, Dimensional models, and Data Vault.
· Hands-on experience with:
· Developing data pipelines (Snowflake) and writing complex SQL queries (illustrated below).
· Building ETL/ELT data pipelines.
· Kubernetes and Linux containers (e.g., Docker …
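As a concrete illustration of the Snowflake pipeline and SQL work listed above, an incremental upsert from a staging table into a target; the table and column names are invented for the example:

```sql
-- Hypothetical incremental load: keep the latest staged row per order,
-- then upsert it into the target table.
MERGE INTO orders AS tgt
USING (
    SELECT order_id, customer_id, order_status, updated_at
    FROM stg_orders
    QUALIFY ROW_NUMBER()
        OVER (PARTITION BY order_id ORDER BY updated_at DESC) = 1
) AS src
    ON tgt.order_id = src.order_id
WHEN MATCHED AND src.updated_at > tgt.updated_at THEN UPDATE SET
    customer_id  = src.customer_id,
    order_status = src.order_status,
    updated_at   = src.updated_at
WHEN NOT MATCHED THEN INSERT (order_id, customer_id, order_status, updated_at)
    VALUES (src.order_id, src.customer_id, src.order_status, src.updated_at);
```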
Crawley, England, United Kingdom Hybrid / WFH Options
McCabe & Barton
design and deliver scalable data products that enable self-service analytics using Snowflake, Power BI, Python, and SQL.
What You’ll Do:
Build robust ETL/ELT pipelines and dimensional models for analytics
Enforce data quality, security, and governance in a Data Mesh setup
Enable self-service insights through intuitive data models and training
Own data products end-to-end … continuous improvement
Promote innovation and best practices across the organisation
What We’re Looking For:
Strong skills in SQL and Power BI; Python is beneficial
Solid experience with data modelling and ETL pipelines
Knowledge of cloud environments – Azure
Experience with Snowflake and Databricks
Familiarity with data governance in decentralised environments
Excellent communication and stakeholder engagement skills
A proactive, problem …
and Dimensions with accurate end-user reports. Proficiency with reporting tools such as Oracle OAS and Microsoft Power BI. Deep understanding of Data Warehouse design, including Star schema and dimensional modelling. Strong analytical skills and technical aptitude, with the ability to influence system architecture decisions. Experience leading testing disciplines within agile projects. Self-starter with initiative and enthusiasm …
and emerging technologies.
What You’ll Bring
✅ Extensive hands-on experience with Databricks and Microsoft Azure data tools (must-have: Azure Data Factory, Azure Synapse, or Azure SQL).
✅ Dimensional modelling expertise for analytics use cases.
✅ Strong ETL/ELT development skills.
✅ Python scripting experience for data automation.
✅ Experience with CI/CD methodologies for data platforms.
✅ Knowledge …
City of London, London, United Kingdom Hybrid / WFH Options
KDR Talent Solutions
SQL Server)
Qualifications
Desired qualifications and experience:
Experience with programming languages such as Python and SQL.
Experience building and maintaining data pipelines.
Familiarity with medallion architecture (see the sketch below).
An understanding of dimensional modelling concepts.
Proficiency with reporting tools, preferably Power BI.
Practical experience working in an Agile Scrum environment.
At Solus, we offer a collaborative and innovative work environment where …
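For readers unfamiliar with the medallion architecture mentioned above, a highly simplified sketch of the bronze → silver → gold layering; the table names, columns, and Databricks-style SQL dialect are all assumptions for illustration:

```sql
-- Bronze: raw data landed as-is from source systems (not shown here).
-- Silver: cleansed, typed, and conformed records built from bronze.
CREATE OR REPLACE TABLE silver_orders AS
SELECT
    CAST(order_id AS BIGINT)    AS order_id,
    TRIM(UPPER(order_status))   AS order_status,
    CAST(order_ts AS TIMESTAMP) AS order_ts
FROM bronze_orders
WHERE order_id IS NOT NULL;     -- basic data-quality filter

-- Gold: business-level aggregates ready for reporting (e.g. Power BI).
CREATE OR REPLACE TABLE gold_daily_orders AS
SELECT
    CAST(order_ts AS DATE) AS order_date,
    order_status,
    COUNT(*)               AS order_count
FROM silver_orders
GROUP BY CAST(order_ts AS DATE), order_status;
```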
About the Platform: This greenfield initiative is focused on building a next-gen data ecosystem with a tech stack including:
Snowflake for cloud data warehousing
dbt for transformation and modelling
Azure for cloud infrastructure and orchestration
Fivetran for automated data ingestion
Power BI and other modern BI tools for reporting and visualisation
🧠 What You’ll Do:
Design and implement … BI developers to deliver insightful, high-performance dashboards
Work with Data Engineers to optimise data ingestion and orchestration pipelines using Azure Data Factory and Fivetran
Apply best practices in dimensional modelling, layered architecture, and data quality (a dbt sketch follows below)
✅ What We’re Looking For:
Strong experience in data modelling and SQL
Hands-on experience with dbt and cloud data platforms … like Snowflake or Azure Synapse Analytics
Solid understanding of modern data stack principles, including layered modelling and data warehouse design
Excellent communication skills and the ability to work with stakeholders across technical and non-technical teams
Nice to have:
Experience with Power BI or similar BI tools
Familiarity with CI/CD practices in data environments
Exposure to data …
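To make the dbt layering above concrete, a minimal two-layer sketch: a staging model feeding a reporting mart. The file paths, source names, and columns are hypothetical:

```sql
-- models/staging/stg_payments.sql: standardise raw source data.
{{ config(materialized='view') }}

SELECT
    id             AS payment_id,
    order_id,
    amount / 100.0 AS amount_gbp,   -- pence to pounds
    created_at
FROM {{ source('app_db', 'raw_payments') }}
```

```sql
-- models/marts/fct_daily_payments.sql: reporting-layer fact model.
{{ config(materialized='table') }}

SELECT
    CAST(created_at AS DATE) AS payment_date,
    COUNT(*)                 AS payment_count,
    SUM(amount_gbp)          AS total_gbp
FROM {{ ref('stg_payments') }}
GROUP BY CAST(created_at AS DATE)
```

dbt resolves ref() to the warehouse-qualified table name and builds models in dependency order, which is what enforces the layered architecture the listing describes.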
business requirements and high-level designs. Ensure alignment of low-level designs with application architecture, high-level designs, and AA Standards, Frameworks, and Policies. Analyse data sets to identify modelling logic and key attributes required for low-level design, and create and maintain appropriate documentation. Develop and update Physical Data Models (PDMs) and participate in design reviews. Lead handover … What do I need? Experienced with data warehouse and business intelligence, including delivering low-level ETL design and physical data models. Proficient in Data Warehousing Design Methodologies (e.g., Kimball dimensional models) and Data Modelling tools (e.g., ER Studio). Strong Data Analysis skills and hands-on experience with SQL/Python for data interrogation. Working knowledge of Cloud …