… or centralisation. Identify and cultivate relationships with key data creators, data owners and data consumers. Ensure data assets are properly defined and maintained within a central data catalogue. Data modelling to transform operational data into analytic/reporting structures such as Kimball-style multi-dimensional models. Take ownership of data issues through to resolution, working with IT and …/dashboards that can be easily understood and used. Locate and define new data-related process improvement opportunities.

Skills and Experience:

Essential:
• Experience managing/leading a team.
• Data modelling, cleansing and enrichment, with experience in conceptual, logical, and physical data modelling.
• Familiarity with data warehouses and analytical data structures.
• Experience of data quality assurance, validation, and lineage.
• Knowledge … Git or other source control software.
• Knowledge of orchestration tools and processes (e.g. SSIS, Data Factory, Alteryx).
• Power BI development, including the data model, DAX, and visualisations.
• Relational and dimensional (Kimball) data modelling.
• Proficiency in SQL (T-SQL, PL/SQL, Databricks SQL).

Desirable:
• Databricks (or an alternative modern data platform such as Snowflake).
• Experience working in a regulated …
… Azure Synapse) and architecting cloud-native data platforms. Programming Proficiency: Expert-level skills in Python (PySpark) and SQL for data engineering and transformation; Scala is a strong plus. Data Modelling: Strong understanding and practical experience with data warehousing, data lake, and dimensional modelling concepts. ETL/ELT & Data Pipelines: Proven track record of designing, building, and optimizing …
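As a rough sketch of the PySpark transformation work a role like this describes (every path, table, and column name below is hypothetical), a minimal batch job might look like the following:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical batch job: path, table, and column names are illustrative only.
spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/orders/")  # assumed raw landing zone

cleaned = (
    raw.dropDuplicates(["order_id"])                     # deduplicate on the business key
       .withColumn("order_date", F.to_date("order_ts"))  # derive a date for dimension joins
       .filter(F.col("amount") > 0)                      # drop invalid rows
)

# Persist as a managed table for downstream analytic/reporting structures
cleaned.write.mode("overwrite").saveAsTable("analytics.fct_orders")
```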
… them with robust data pipelines.

DESIRABLE LANGUAGES/TOOLS
Proficiency in programming languages such as Python, Java, Scala, or SQL for data manipulation and scripting. Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling. Experience in big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink. Experience in using modern data architectures …
They closely collaborate with data scientists, analysts, and software engineers to ensure efficient data processing, storage, and retrieval for business insights and decision-making. Their expertise in data modelling, ETL (Extract, Transform, Load) processes, and big data technologies makes it possible to develop robust and reliable data solutions.

RESPONSIBILITIES
Data Pipeline Development: Design, implement, and maintain scalable data … in software engineering a plus.

DESIRABLE LANGUAGES/TOOLS
Proficiency in programming languages such as Python, Java, Scala, or SQL for data manipulation and scripting. Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling. Experience in big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink. Experience in using modern data architectures …
City of London, London, United Kingdom Hybrid / WFH Options
83data
… including scheduling, monitoring, and alerting. Collaborate with cross-functional teams (Product, Engineering, Data Science, Compliance) to define data requirements and build reliable data flows. Champion best practices in data modelling, governance, and DevOps for data engineering (CI/CD, IaC). Serve as a key communicator between technical teams and business stakeholders, translating complex data needs into actionable plans. … Snowflake, BigQuery, Redshift). Hands-on experience with Apache Airflow (or similar orchestration tools). Strong proficiency in Python and SQL for pipeline development. Deep understanding of data architecture, dimensional modelling, and metadata management. Experience with cloud platforms (AWS, GCP, or Azure). Familiarity with version control, CI/CD, and Infrastructure-as-Code (Terraform or similar). …
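The ad above asks for hands-on Apache Airflow experience. As a hedged illustration of the scheduling and retry behaviour it mentions (DAG name, tasks, and schedule are all invented for the example), a minimal DAG might look like this:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical two-step pipeline: task names and schedule are illustrative.
def extract_orders():
    print("pulling data from a source system")

def load_warehouse():
    print("loading transformed data into the warehouse")

default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # scheduling, monitoring and alerting hook in here
    catchup=False,
    default_args=default_args,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    load = PythonOperator(task_id="load", python_callable=load_warehouse)

    extract >> load  # extract must succeed before the load runs
```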
• In-depth knowledge of Snowflake architecture, features, and best practices.
• Experience with CI/CD pipelines using Git and GitHub Actions.
• Knowledge of various data modeling techniques, including Star Schema, Dimensional models, and Data Vault.
• Hands-on experience with:
• Developing data pipelines (Snowflake), writing complex SQL queries.
• Building ETL/ELT/data pipelines.
• Kubernetes and Linux containers (e.g., Docker) …
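To make the Snowflake and star-schema requirements above concrete, here is a sketch only, with placeholder credentials and invented table names, of a Python script querying a dimensional model through the Snowflake connector:

```python
import snowflake.connector

# Placeholder connection details: the account, credentials, and object names
# below are invented for illustration, not real values.
conn = snowflake.connector.connect(
    account="xy12345",
    user="REPORTING_USER",
    password="***",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="MART",
)

# A typical star-schema query: one fact table joined to its dimensions.
query = """
    SELECT d.calendar_month,
           c.customer_segment,
           SUM(f.sales_amount) AS total_sales
    FROM   fct_sales f                 -- fact table
    JOIN   dim_date d     ON f.date_key = d.date_key
    JOIN   dim_customer c ON f.customer_key = c.customer_key
    GROUP BY d.calendar_month, c.customer_segment
"""

cur = conn.cursor()
try:
    cur.execute(query)
    for row in cur.fetchall():
        print(row)
finally:
    cur.close()
    conn.close()
```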
City of London, London, United Kingdom Hybrid / WFH Options
KDR Talent Solutions
… and emerging technologies.

What You'll Bring
✅ Extensive hands-on experience with Databricks and Microsoft Azure data tools (must-have: Azure Data Factory, Azure Synapse, or Azure SQL).
✅ Dimensional modelling expertise for analytics use cases.
✅ Strong ETL/ELT development skills.
✅ Python scripting experience for data automation.
✅ Experience with CI/CD methodologies for data platforms.
✅ Knowledge …
Data Lakehouse
Medallion architecture
Microsoft Azure
T-SQL development (MS SQL Server 2005 onwards)
Python, PySpark

Experience of the following systems would also be advantageous:
Azure DevOps
MDS
Kimball Dimensional Modelling Methodology
Power BI
Unity Catalogue
Microsoft Fabric

Experience of the following business areas would be advantageous:
Insurance sector (Lloyd's Syndicate, Underwriting, Broking)

Qualifications: Degree educated in relevant …
… with MS SQL Server, T-SQL, and performance tuning for reporting workloads. An understanding of SSRS and SSIS for traditional reporting and ETL processes. Data Warehousing Concepts: Understanding of dimensional modeling, fact and dimension tables. Solid understanding of data visualisation principles and dashboard design best practices. Familiarity with Azure DevOps version control for Power BI and SQL development. Performance …
… business requirements and high-level designs. Ensure alignment of low-level designs with application architecture, high-level designs, and AA Standards, Frameworks, and Policies. Analyse data sets to identify modelling logic and key attributes required for low-level design, and create and maintain appropriate documentation. Develop and update Physical Data Models (PDMs) and participate in design reviews. Lead handover … What do I need? Experienced with data warehouse and business intelligence, including delivering low-level ETL design and physical data models. Proficient in Data Warehousing Design Methodologies (e.g., Kimball dimensional models) and Data Modelling tools (e.g., ER Studio). Strong Data Analysis skills and hands-on experience with SQL/Python for data interrogation. Working knowledge of Cloud …
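The hands-on data interrogation this role calls for is easy to picture in Python. As a minimal, hypothetical sketch (file name and columns invented), profiling a data set before committing attributes to a physical data model might look like this:

```python
import pandas as pd

# Hypothetical extract: file and column names are illustrative only.
df = pd.read_csv("policy_extract.csv")

# Checks worth running before fixing attributes in a physical data model:
print(df["policy_id"].is_unique)          # can this column serve as a key?
print(df.isna().mean().sort_values())     # null ratio per column
print(df.dtypes)                          # physical types to carry into the PDM
print(df["product_code"].value_counts())  # cardinality of a candidate dimension
```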
City of London, England, United Kingdom Hybrid / WFH Options
Fruition Group
… Databricks, Azure SQL, and Data Factory. Deep technical knowledge of SQL Server including stored procedures and complex data transformation logic. Proven experience in designing and delivering data warehousing and dimensional modelling solutions. Excellent collaboration skills with a track record of working in Agile teams. Experience with Azure DevOps, Git, and CI/CD pipelines. Comfortable liaising directly with …
… solutions to both technical and non-technical audiences, tailoring communication style based on the audience.

Data Modeling and Warehousing:
• Design and implement data models optimized for analytical workloads, using dimensional modeling techniques (e.g., star schema, snowflake schema).
• Participate in the design, implementation, and maintenance of data warehouses, ensuring data integrity, performance, and scalability.

BASIC QUALIFICATIONS
• Educational Background: Bachelor … optimization.
• Programming/Statistical Analysis Skills: Working knowledge of R or Python for analytics, data manipulation, and algorithm development.
• Data Warehousing Knowledge: In-depth knowledge of data warehousing principles, dimensional modeling techniques (e.g., star schema, snowflake schema), and data quality management.
• Communication and Collaboration Abilities: Excellent verbal and written communication skills, with the ability to effectively communicate technical concepts …
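Since the listing names Python alongside star-schema modeling, a small self-contained sketch (all data and names invented) shows what joining a fact table to a dimension and aggregating for a report looks like in pandas:

```python
import pandas as pd

# Hypothetical star schema: a sales fact keyed to a customer dimension.
fct_sales = pd.DataFrame({
    "date_key": [20240101, 20240101, 20240102],
    "customer_key": [1, 2, 1],
    "sales_amount": [120.0, 80.0, 45.5],
})
dim_customer = pd.DataFrame({
    "customer_key": [1, 2],
    "customer_segment": ["Retail", "Corporate"],
})

# Join the fact to its dimension, then aggregate for reporting
report = (
    fct_sales.merge(dim_customer, on="customer_key", how="left")
             .groupby("customer_segment")["sales_amount"].sum()
)
print(report)
```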
Experience of designing and developing systems using microservices architectural patterns. DevOps experience implementing development, testing, release, and deployment processes. Knowledge of data modeling (3NF/Dimensional modeling/Data Vault 2.0). Work experience in agile delivery. Able to provide comprehensive documentation. Able to set and manage realistic expectations for timescales, costs, benefits, and measures for …
City Of London, England, United Kingdom Hybrid / WFH Options
Pioneer Search
… Microsoft Azure ecosystem

Exposure to or interest in:
Microsoft Fabric and its evolving role in enterprise data platforms
Azure DevOps for CI/CD and deployment
T-SQL and dimensional modelling (Kimball methodology)

Experience in Financial Services or Lloyd's market is a plus.

Apply now or get in touch to find out more - alexh@pioneer-search.com
… data visualization platforms. Demonstrable experience planning and executing complex reporting & analytics projects across multiple stakeholders. Understanding of data quality frameworks and the importance of availability of reliable data. Knowledge of dimensional modelling and experience. Strong analytical thinking and problem-solving skills with the ability to interpret complex data and provide actionable insights. Curiosity and willingness to explore complex and …
capabilities, they are evolving toward a clearer separation between Data Engineering, Analytics Engineering, and Data Product disciplines. This role will sit firmly in the Analytics Engineering function, focused on modelling and building the semantic layer that powers consistent, reliable insights across the company's BI and data science platforms. This role will focus on the "middle layer", designing dimensional … other downstream consumers. Work closely with Data Engineers responsible for ingestion (from source systems to raw layers such as S3 or cloud storage), but focus your efforts on the modelling and transformation stage. Collaborate with the Data Product team to ensure the semantic layer serves evolving business and analytical needs. Support best practices in CI/CD (using GitHub … maintaining dbt pipelines. Contribute to a common, reusable data model that serves BI, Data Science, and AI/ML teams alike.

Required Skills & Experience: Strong experience with SQL and dimensional modelling in dbt. Proven experience building and maintaining semantic layers in modern data platforms. Familiarity with Medallion architecture, CI/CD processes (GitHub), and version-controlled data workflows. …
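dbt models are most often written in SQL, but dbt also supports Python models on platforms such as Databricks and Snowflake. As a hedged sketch of the kind of middle-layer transform this role describes (model and column names invented, a Databricks/PySpark-style DataFrame API assumed), a dbt Python model might look like this:

```python
# models/marts/fct_orders.py -- a sketch of a dbt *Python* model.
# Model names and columns are invented; a Databricks (PySpark) target
# is assumed, where dbt.ref() returns a Spark DataFrame.

def model(dbt, session):
    dbt.config(materialized="table")

    orders = dbt.ref("stg_orders")        # upstream staging model
    customers = dbt.ref("stg_customers")  # conformed customer source

    # Join staging models into an analysis-ready, reusable fact table
    # that BI tools and data science consumers can share.
    fct = orders.join(customers, on="customer_id", how="left")
    return fct
```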