is spent. We drive forward improved outcomes and efficiency in public services and make sure value for money is at the centre of decision-making through better evaluation, data and analysis. We advise on overall government policy on public sector pay and pensions, the biggest single driver of public spending. We collaborate with and directly support departments to deliver … Model to share, improve and drive efficiency in the delivery of finance services. NOVA contains a full suite of functional design artefacts (Processes, KPIs, Business Glossary and a comprehensive Data Dictionary) to be adopted by all government organisations through the delivery of shared services and back-office systems. We are committed to enhancing finance performance by supporting the adoption … of the NOVA finance functional design and its data standards. Driving up finance data maturity, improving data architecture, and creating data integration solutions are central to our GFF 2030 Strategy and ambition for an Insightful and Data Driven finance function. The seventeen-person team works across a wide range of …
and implementing robust, scalable, and efficient data systems that power analytics, machine learning models, and business insights. The ideal candidate will have expertise in data pipeline orchestration (e.g., Airflow), data lake and warehouse architecture and development, infrastructure as code (IaC) using Terraform, and data extraction from both structured and unstructured data … solutions. Leverage best practices in schema design, partitioning, and optimisation for efficient storage and retrieval. Build and maintain data models to support analytics and machine learning workflows. Pipeline Orchestration: Develop, monitor, and optimise ETL/ELT workflows using Apache Airflow. Ensure data pipelines are robust, error-tolerant, and scalable for real-time and batch processing. … Engineering, or a related field; or equivalent professional experience. Experience: 5+ years of experience in data engineering or a related field. Strong expertise in data pipeline orchestration tools such as Apache Airflow. Proven track record of designing and implementing data lakes and warehouses (experience with Azure is a plus). Demonstrated experience …
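To make the orchestration requirement above concrete, here is a minimal sketch of the kind of error-tolerant Airflow DAG such a role involves. The DAG name, task bodies, and retry settings are illustrative assumptions, not details from the advert.

```python
# Minimal sketch of an error-tolerant ETL DAG (Airflow 2.4+ syntax).
# The "orders_raw_etl" naming and task logic are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull a batch from the source system; in practice this would call an
    # API client or a database hook and stage the result.
    return [{"order_id": 1, "amount": 42.0}]


def transform(ti, **context):
    rows = ti.xcom_pull(task_ids="extract")
    # Apply business rules and basic validation before loading.
    return [r for r in rows if r["amount"] > 0]


def load(ti, **context):
    rows = ti.xcom_pull(task_ids="transform")
    print(f"Loading {len(rows)} rows into the warehouse")


with DAG(
    dag_id="orders_raw_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    # Retries with a delay are one baseline way to meet the
    # "error-tolerant" requirement named in the posting.
    default_args={"retries": 3, "retry_delay": timedelta(minutes=5)},
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```

A linear extract, transform, load dependency chain with retries is the baseline pattern; production DAGs would typically add alerting and idempotent loads on top.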
Analyst/Engineer role is responsible for collecting, processing, and performing analyses on large datasets to extract meaningful insights. This role involves using statistical tools, programming languages, and data visualization techniques to support data-driven decision-making. The role ensures that data flows smoothly from source to destination, making it accessible for other analysts. … As our new BI Analyst/Engineer you'll have the opportunity to: Data Collection and Processing: Gather data from various sources (databases, APIs, spreadsheets), clean and validate it to ensure accuracy, and prepare it for analysis. Data Analysis: Utilize statistical methods, algorithms, and analytical techniques to analyze data sets. Identify significant … trends, patterns, and relationships. Data Visualization: Create detailed and interactive visualizations using tools like Tableau, Power BI, or custom dashboards to present findings clearly. Reporting: Develop comprehensive reports and presentations to communicate insights and recommendations to stakeholders. Design and Build Data Pipelines: Develop and maintain scalable data pipelines and build out new API integrations …
Company Description Hitachi Solutions Europe is a global Digital, Data and Technology consultancy, Microsoft Gold partner and Cloud Services partner, specialising in end-to-end transformation. As a global consultancy firm working across the private and public sectors, we specialise in Dynamics 365 Business Applications, Power Platform, including Azure, Application Modernisation and Data & Analytics. Our highly … practices, ensuring delivery of scalable and efficient solutions. Define high-value business scenarios that can benefit from AI solutions (Machine Learning and Gen AI). Explore and analyse data from various sources and formats using tools such as Microsoft Fabric, Azure Databricks, Azure Synapse Analytics, and Azure Machine Learning. Implement data pipelines and workflows to automate … that coding, security and CI/CD best practices are followed. Communicate and present findings and recommendations to stakeholders and customers using tools such as Power BI, Azure Data Explorer, and Azure AI Services. Qualifications Key Competencies Demonstrable experience in data science, machine learning, or a related field. Proficiency in Python, SQL, or other programming languages …
robust and scalable machine learning pipelines. This person will possess a strong background in DevOps practices, machine learning principles, and cloud computing platforms. You will work closely with data scientists and software engineers to streamline the deployment and monitoring of machine learning models, ensuring efficiency and reliability in ML operations. We hire based on personality, potential, and enthusiasm … best practices and methodologies. • Experience with version control systems (e.g., Git). • Familiarity with CI/CD tools and practices. • Strong problem-solving and analytical skills. • Understanding of data structures and algorithms. • Ability to design and develop scalable, efficient, and maintainable software systems. • Experience with microservice architecture, API development. Machine Learning (ML): • Deep understanding of machine learning principles … algorithms, and techniques. • Experience with popular ML frameworks and libraries like TensorFlow, PyTorch, scikit-learn, or Apache Spark. • Proficiency in data preprocessing, feature engineering, and model evaluation. • Knowledge of ML model deployment and serving strategies, including containerization and microservices. • Familiarity with ML lifecycle management, including versioning, tracking, and model monitoring. • Ability to optimize and fine-tune ML models …
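As an illustration of the "data preprocessing, feature engineering, and model evaluation" bullet, here is a minimal scikit-learn sketch; the synthetic data and the choice of model are placeholders, not details from the advert.

```python
# Illustrative sketch of a preprocessing-plus-evaluation workflow in scikit-learn.
# The feature matrix and target below are synthetic stand-ins for real data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))              # hypothetical feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hypothetical binary target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A Pipeline bundles preprocessing with the model so the same transformations
# are applied consistently at training time and at serving time.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```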
UK (Cannot provide sponsorship) Join a leading UK consulting and administration business specialising in the pensions and insurance sectors. As an ML Ops Engineer in our Pensions Advisory - Data Analytics department, you will be at the forefront of developing and deploying machine learning models that enhance our consulting capabilities and client offerings. Day-to-day of the role … processes. Machine Learning Operations: Design, deploy, maintain, and refine statistical and machine learning models using Azure ML. Optimize model performance and ensure smooth application operations with large-scale data handling. Data Management and Preprocessing: Manage the collection, cleaning, and preprocessing of large datasets. Implement data pipelines and ETL processes to ensure data … code in Python. Implement CI/CD practices for version control, testing, and code review. Collaboration and Training: Work closely with various teams within the organisation to integrate data science findings into practical strategies. Provide training and support to team members on machine learning tools and analytical techniques. Research and Development: Stay updated with the latest trends and …
Job Type: Contract Job Location: Wimbledon, UK Job Description: For this role, senior experience of data engineering and of building automated data pipelines on IBM DataStage & DB2, AWS and Databricks is expected, from source systems to operational databases through to the curation layer, using the latest modern cloud technologies; experience of delivering complex pipelines will be significantly valuable … to how to maintain and deliver world-class data pipelines. Knowledge in the following areas is essential: Databricks: Expertise in managing and scaling Databricks environments for ETL, data science, and analytics use cases. AWS Cloud: Extensive experience with AWS services such as S3, Glue, Lambda, RDS, and IAM. IBM Skills: DB2, DataStage, Tivoli Workload Scheduler, Urban Code … for data governance, security, and data quality across our data platform. Ensure data is well-documented, accessible, and meets compliance standards. Pipeline Automation & Optimisation: Drive the automation of data pipelines and workflows to improve efficiency and reliability. Team Management: Mentor and grow a team of data engineers …
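A hedged sketch of the source-to-curation-layer flow this role describes, written as a PySpark job of the kind run on Databricks; the bucket, column, and table names are invented, and a real pipeline would pin an explicit schema rather than rely on header inference.

```python
# Sketch of a raw-to-curated hop on Databricks/AWS; names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curation_example").getOrCreate()

# Raw zone: land the operational extract as-is. Production pipelines would
# declare an explicit schema instead of inferring from headers.
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Curation layer: apply typing, de-duplication, and basic quality rules.
curated = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount").isNotNull())
)

# Registering as a governed table keeps the curated layer discoverable.
curated.write.mode("overwrite").saveAsTable("curated.orders")
```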
JOB DESCRIPTION Job title: Data Analytics Engineer (BI-Focused) Reports to: Lead Data Manager Location: London (Holborn), remote 2d/w. JOB SUMMARY This role contributes to the success of SOFYNE by designing and implementing robust data pipelines, transforming and modelling data, and delivering insightful dashboards and reports. The Data Analytics Engineer will collaborate with stakeholders across all departments to gather business requirements, design scalable data solutions using Microsoft technologies, and deliver training to empower end users. PRINCIPAL MISSIONS Design and implement data pipelines using Azure Data Factory or Microsoft Fabric. Develop and maintain SQL-based transformations and data models … modelling skills. Proficiency in Power BI (data modelling, DAX, report design). Experience with Azure Data Factory and/or Microsoft Fabric for pipeline development (or Python pipeline development). Understanding of data warehouse design and ETL/ELT best practices. Strong communication and stakeholder engagement skills. Customer service mindset with …
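Since the role allows Python pipeline development as an alternative to SQL transformations, here is a small pandas sketch of the transform-and-summarise step a Power BI report might sit on top of; the column names and quality rules are hypothetical.

```python
# Minimal pandas sketch of a BI-oriented transformation; all names invented.
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2, 2, 3],
    "department": ["ops", "sales", "sales", "ops"],
    "amount": [10.0, 25.0, 25.0, None],
})

clean = (
    orders.drop_duplicates(subset="order_id")  # enforce one row per order
          .dropna(subset=["amount"])           # basic quality rule
)

# A summary table of the kind a Power BI semantic model could consume.
summary = clean.groupby("department", as_index=False)["amount"].sum()
print(summary)
```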
At Dufrain, we don’t just build data solutions; we shape the future of data-driven decision-making. Do you thrive on solving complex challenges and turning data into powerful insights that drive transformation? We’re looking for Data Engineers who have a broad range of data engineering skills, with … Microsoft Fabric, Azure Data Factory, or Azure Synapse. Python programming: Ability to write clean, efficient Python code for data manipulation and analysis. Data pipeline development: Experience building data pipelines for Snowflake using native tools or third-party platforms like Informatica, Fivetran, or Matillion. End-to-end data project delivery … ETL/ELT, data lakes, warehousing, MDM, and BI. Engineering delivery practices: Understanding of Agile and DevOps methodologies, including Git, APIs, Containers, Microservices, and data pipeline orchestration. Data architecture and modelling: Familiarity with approaches such as Inmon, Kimball, or Data Vault. Nice-to-have certifications: SnowPro Advanced Data Engineer, SnowPro Advanced Administrator …
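For the Snowflake pipeline skill listed above, here is a minimal native-tooling sketch in Python; the account, stage, and table names are placeholders, and in practice credentials would come from a secrets manager rather than being inlined.

```python
# Sketch of a bulk load into Snowflake from Python; all identifiers invented.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # hypothetical account identifier
    user="etl_user",        # hypothetical service user
    password="***",         # would be injected from a vault in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # COPY INTO is Snowflake's native bulk-load path; it is also the kind of
    # statement that platforms like Fivetran or Matillion generate under the hood.
    cur.execute(
        "COPY INTO raw.orders FROM @orders_stage FILE_FORMAT = (TYPE = CSV)"
    )
finally:
    conn.close()
```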
Job Summary Job Description What is the opportunity? We have an exciting opportunity for a Data Engineer to join the team in our London/Newcastle offices. The successful candidate will work closely with business and technology teams across Wealth Management Europe (WME) to support the ongoing maintenance and evolution of the Data Lakehouse platform, focusing … on the ingestion and modelling of new data, and the evolution of the platform itself, utilising new technologies to improve performance and accuracy of the data. What will you do? Responsible for the development and ongoing maintenance of the Data Lakehouse platform infrastructure using the Microsoft Azure technology stack, including Databricks and Data Factory. … Hub and the supporting processes like Data Integration, Governance, and Metadata Management. Strong experience in working with large, heterogeneous datasets in building and optimising data pipelines, pipeline architectures and integrated datasets using traditional data integration technologies. Strong experience with popular database programming languages for relational databases (SQL, T-SQL). Knowledge of working with SQL …
Data Engineering Consultant (all levels) We’re working with a globally recognised consultancy that delivers cutting-edge digital, data, and analytics solutions to some of the most complex infrastructure projects across the UK and beyond. With a strong commitment to technical excellence, inclusion, and client impact, they partner with high-profile organisations to help them unlock … the full potential of their data assets. Due to continued growth and demand for their data services, they are expanding their specialist engineering team and seeking talented Data Engineering Consultants at various seniority levels, from experienced engineers to those ready to step into leadership roles. The Role As a Data Engineering Consultant … solutions, delivery plans, and architectural designs. Build and maintain highly automated, scalable data pipelines using modern cloud-based tools. Identify and resolve data quality and pipeline performance issues to ensure stability and integrity. Contribute to the design and deployment of data lakes, warehouses, and streaming architectures. Participate in mentoring and upskilling junior team …
We are on the lookout for a Data Engineer to join a dynamic, forward-thinking company. You’ll work in various settings to build systems that collect, manage, and convert raw data into usable information for data scientists and business analysts to interpret. The … ultimate goal is to make data accessible so that stakeholders can use it to evaluate and optimise their performance. What you’ll be responsible for: Support designing, developing, implementing, managing and supporting enterprise-level ETL/ELT processes and environments. Technical and business processes are used to combine data from multiple sources to provide a unified … including design patterns innovation, data lifecycle design, data ontology alignment, annotated datasets, and elastic search approaches. Developing, creating and maintaining a reliable data pipeline and schemas that feed other data processes; this includes both technical processes and business logic to transform data from disparate sources into cohesive, meaningful and valuable …
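One way to make the "reliable data pipeline and schemas" responsibility concrete is an explicit schema contract at the pipeline boundary. Below is a sketch using pyarrow; the field names are invented for illustration.

```python
# Sketch of a schema contract that incoming batches are validated against.
from datetime import datetime

import pyarrow as pa

# The contract: every batch feeding downstream processes must match this.
orders_schema = pa.schema([
    pa.field("order_id", pa.int64(), nullable=False),
    pa.field("customer_id", pa.int64()),
    pa.field("amount", pa.float64()),
    pa.field("created_at", pa.timestamp("us")),
])

# A toy incoming batch; a real pipeline would build this from source extracts.
batch = pa.table({
    "order_id": [1],
    "customer_id": [7],
    "amount": [9.99],
    "created_at": [datetime(2024, 1, 1)],
})

# cast() raises if a batch's columns can no longer be converted to the
# contract, which surfaces schema drift at the boundary instead of downstream.
validated = batch.cast(orders_schema)
```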
London, England, United Kingdom Hybrid / WFH Options
Yapily Ltd
global open economy that works for everyone. We exist behind the scenes, securely connecting companies - from growth to enterprise - to thousands of banks worldwide, enabling them to access data and initiate payments through the power of open banking. What we’re looking for As a Java Software Engineer focused on Data Products at Yapily, you will play a key role in designing and implementing our next-generation data systems. You’ll be responsible for developing high-performance data pipelines, billing infrastructure and APIs that power our suite of products – including Reports API, Analytics API and Insights API – ensuring data is reliably processed and securely delivered to our customers. Develop & Optimize … data processing. Experience supporting BI tools and data visualization platforms, particularly Looker. Knowledge of version control and CI/CD practices for data pipeline deployment. Experience monitoring and troubleshooting data pipelines in production environments. Understanding of data security best practices and encryption methods for sensitive data. Ability to optimize …
London, England, United Kingdom Hybrid / WFH Options
Hometree Group
Employees - We have over 280 passionate employees transforming the industry across the Group, one home at a time. The Role We're seeking a motivated and detail-oriented Data Engineer with experience in building and managing scalable data pipelines and infrastructure. You'll be responsible for developing robust ETL/ELT processes, optimising and maintaining our data warehouse, and ensuring data quality and standards through comprehensive monitoring and governance. Collaborating closely with our Data, Tech and Product teams, you'll help shape and support our data strategy. The ideal candidate should have familiarity with SQL and cloud-based data solutions, be driven towards automated solutions and thrive … using BigQuery or similar cloud data solutions. Technical Skills: Strong proficiency in SQL and ETL/ELT frameworks and experience with data modelling, optimisation, and pipeline orchestration. Python or a similar programming language is a nice-to-have! Cloud Proficiency: Experience working with cloud platforms such as Google Cloud Platform (GCP) or AWS, particularly with services …
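To illustrate the SQL-plus-BigQuery stack this role centres on, here is a minimal ELT step driven from Python; the project, dataset, and table names are made up.

```python
# Sketch of a scheduled ELT step against BigQuery; identifiers are invented.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

# ELT pattern: land raw data first, then transform it in-warehouse with SQL.
sql = """
CREATE OR REPLACE TABLE analytics.daily_jobs AS
SELECT job_date, COUNT(*) AS jobs_completed
FROM raw.engineer_visits
GROUP BY job_date
"""

# query() submits the job; result() blocks until completion and raises on error,
# which is what an orchestrator needs in order to retry or alert.
client.query(sql).result()
```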
the latest version of a product that allows our customers to report and visualize their key employee/HR data. Role Responsibilities: What you'll do Data Pipeline Development: Design, develop, and maintain robust and scalable data pipelines to support the extraction, transformation, and loading (ETL) of data from diverse sources into our … data warehouse. Data Modeling: Create and manage data models, ensuring data integrity, consistency, and accuracy. Collaborate with stakeholders to understand data requirements and design appropriate data structures. Performance Optimization: Identify and implement performance improvements for data processing and storage, working to optimize query performance and reduce … latency. Data Security and Governance: Implement and enforce data security measures and governance policies to protect sensitive information. Ensure compliance with data privacy regulations. Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and software engineers, to understand their data needs and provide timely and accurate support. Documentation …
London, England, United Kingdom Hybrid / WFH Options
Deel
Who We Are Is What We Do. Deel is the all-in-one payroll and HR platform for global teams. Our vision is to unlock global opportunity for every person, team, and business. Built for the … a sought-after leader in the transformation of global work. The Team The Data Platform team at Deel is dedicated to enhancing data quality, optimizing pipeline performance, building robust platform tools, and managing costs across the entire data stack, from ingestion to outbound integrations and everything in between. As a Data Engineer on this team, you’ll play a critical role in shaping the future of Deel’s data infrastructure, ensuring it scales effectively with 30+ Analytics Engineers and 100+ data professionals embedded across the organization. Our team collaborates cross-functionally with analysts, analytics engineers, data scientists, software engineers, and leadership to achieve …
London, South East, England, United Kingdom Hybrid / WFH Options
McGregor Boyall
Data Engineer Location: London (Hybrid - 3 days/week in office or client site) Day Rate: £400-£425/day (Outside IR35) Start Date: ASAP Duration: 6 months NO SPONSORSHIP AVAILABLE An established London-based logistics company is undergoing a major transformation in its data and analytics capabilities and is seeking a Data Engineer … to support the build and optimisation of their next-generation data platform. Working as part of a growing data team, you will play a critical role in designing and deploying scalable data pipelines and solutions using Azure Databricks and related technologies. This is an opportunity to contribute to a cloud-first, modern data strategy within a collaborative and forward-thinking environment. Key Responsibilities: Design and develop end-to-end data pipelines (batch and streaming) using Azure Databricks, Spark, and Delta Lake. Implement the Medallion Architecture and ensure consistency across raw, enriched, and curated data layers. Build and optimise ETL/ELT processes using Azure Data …
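A sketch of one hop in the Medallion Architecture named in the responsibilities, moving data from the raw (bronze) to the enriched (silver) layer with Delta Lake; the paths and column names are invented. A streaming variant of the same hop would use readStream/writeStream with a checkpoint location.

```python
# Hedged sketch of a bronze-to-silver batch hop with Delta Lake on Databricks.
# Paths and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_hop").getOrCreate()

# Bronze: raw ingested records, kept as close to the source as possible.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/shipments")

# Silver: de-duplicated, quality-checked, and enriched with lineage metadata.
silver = (
    bronze.dropDuplicates(["shipment_id"])
          .filter(F.col("status").isNotNull())
          .withColumn("ingested_at", F.current_timestamp())
)

# Delta's transactional writes keep the silver layer consistent for readers
# even while it is being rewritten.
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/shipments")
```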
West London, London, England, United Kingdom Hybrid / WFH Options
Delaney & Bourton
Role: BI & Data Engineering Lead Location: Hybrid, 2 days in the office, London Salary: circa £75k-£80k + Benefits Package Organisation: A B2B organisation hugely fuelled by untapped data, with a super impressive Fortune 500 client base, well positioned for future growth and opportunity. Role: A unique opportunity for someone who enjoys variety and challenge. This … role will touch all parts of BI & Data Engineering, from strategy > tooling > architecture > implementation, as well as working closely with the organisation to gain more value from unstructured, complex data. A player/coach role: it will line-manage a small and growing team, as well as keep their hands dirty, specifically with architecture and data engineering (data pipelines, warehouses and transformation logic), and will have accountability for the team that delivers the BI reporting solutions (Power BI) to the organisation. This role will be pivotal in enabling the business to turn data-driven decision-making into reality. This will result in significant business value and opportunity. Well suited to …
Job Title: Contract Senior QA Engineer – Data Analysis Contract Length: 12 Months (with potential extension) Day Rate: Up to £475 per day (Inside IR35) Start Date: ASAP About the Role I'm seeking a highly skilled Senior QA Engineer with strong experience in data migration, analysis, quality and validation to join an outstanding flagship digital team … on a contract basis. You’ll play a key role in ensuring the accuracy, reliability, and performance of their data pipelines and analytics products. This is an excellent opportunity to work with a cross-functional team of … engineers, analysts, and data scientists in a fast-paced environment where quality and precision are critical. Key Responsibilities: Design, develop, and execute test plans for ETL pipelines, data transformations, and analytics products. Develop automated tests and validation tools for data integrity, accuracy, completeness, and performance. Identify, document, and track defects in large-scale data …
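As a sketch of the automated validation tooling this contract calls for, here is a small pytest suite checking completeness and integrity after a migration; the fixture data and the hard-coded expected row count stand in for queries against the source and target systems.

```python
# Minimal post-migration validation suite; table names and counts are invented.
import pandas as pd
import pytest


@pytest.fixture
def migrated() -> pd.DataFrame:
    # In a real suite this fixture would query the target warehouse.
    return pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})


def test_completeness(migrated):
    # Row counts in source and target should reconcile after migration.
    expected_rows = 3  # would be read from the source system in practice
    assert len(migrated) == expected_rows


def test_integrity(migrated):
    # Keys must stay unique and mandatory fields must stay populated.
    assert migrated["id"].is_unique
    assert migrated["amount"].notna().all()
```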