Enterprise Data Architect/Senior Solution Data Architect/Head of Data Architecture The role is responsible for shaping, governing, and enabling data-centric solution design across the client's global technology landscape. The role ensures that all data and analytics solutions align to enterprise architecture principles, support the North Star vision, and deliver business value … connected, and governed data. Working across programmes such as UNIFY (SAP S/4HANA), Integrated Supply Chain (Blue Yonder), and Sales Excellence (Salesforce CGC), the role bridges business strategy, data architecture, and solution delivery to ensure a unified enterprise data fabric and analytics capability. Key Accountabilities Solution Leadership Lead end-to-end data solution design across … global initiatives, ensuring consistency with enterprise data architecture standards and integration principles. Translate business and information requirements into scalable, secure, and performant data solutions leveraging the enterprise platforms (Azure, Databricks, Power BI, SAP BTP, Salesforce, etc.). Ensure data models, pipelines, and analytics solutions are built with reuse, interoperability, and data quality in mind. Enterprise Alignment …
Central London, London, England, United Kingdom Hybrid/Remote Options
E-Solutions IT Services UK Ltd
Position: PCT Data Engineer Location: Central London Hybrid work (3 days per week onsite) Description of role and key responsibilities: The primary function of the role is to deliver high-quality data engineering solutions to business and end users across Private Client Transactional and Lending teams – either directly via self-service data products, or by working closely with … the Analytics team, providing modelled data products on which they can add reporting and analytics. The candidate will be required to deliver to all stages of the data engineering process – data ingestion, transformation, data modelling and data warehousing, and build self-service data products. The role is a mix of Azure cloud delivery … work closely with our Architect, Engineering lead, Analytics team, DevOps, DBAs, and upstream Application teams in Private Client Technology. Specifically, the person will: Work closely with end-users and Data Analysts to understand the business and their data requirements Carry out ad hoc data analysis and ‘data wrangling’ using Synapse Analytics and Databricks Building dynamic meta …
Azure Data Engineer VIQU has partnered with a leading Telecommunications organisation seeking an experienced Azure Data Engineer to design, develop, and manage scalable data solutions on the Microsoft Azure platform. You’ll play a key role in architecting and implementing robust data integration solutions, as well as building ETL pipelines to support data ingestion, transformation, and loading. The Role: As an Azure Data Engineer, you’ll collaborate with cross-functional teams in an agile environment — contributing to sprint planning, daily stand-ups, and other agile ceremonies. Strong communication skills are essential, as you’ll often translate complex data concepts into clear insights for non-technical stakeholders. Key Responsibilities: Architect, design, and … implement scalable Azure-based data solutions. Develop and maintain ETL processes for data ingestion, transformation, and loading. Ensure data governance, integrity, and quality throughout the data lifecycle. Implement robust data security, compliance, and privacy standards. Document data architectures, data flows, and processes for knowledge sharing and audit readiness. Continuously enhance workflows using …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
Data Engineer (Databricks & Azure) - 3-Month Rolling Contract Rate: £400-£450 per day Location: Remote IR35 Status: Outside IR35 Duration: Initial 3 months (rolling) About the Company Join a leading Databricks Partner delivering innovative data solutions for enterprise clients. You'll work on cutting-edge projects leveraging Databricks and Azure to transform data into actionable insights. About … the Role We are seeking an experienced Data Engineer with strong expertise in Databricks and Azure to join our team on a 3-month rolling contract. This is a fully remote position, offering flexibility and autonomy while working on high-impact data engineering initiatives. Key Responsibilities Design, develop, and optimize data pipelines using Databricks and Azure Data Services. Implement best practices for data ingestion, transformation, and storage. Collaborate with stakeholders to ensure data solutions meet business requirements. Monitor and troubleshoot data workflows for performance and reliability. Essential Skills Proven experience with Databricks (including Spark-based data processing). Strong knowledge of Azure Data Platform (Data Lake, Synapse, etc. …
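For context on the kind of pipeline work this listing describes, here is a minimal sketch of a Databricks-style ingestion step in PySpark: read raw CSV from Azure Data Lake Storage, apply light cleansing, and persist a partitioned Delta table. The storage paths, column names, and partitioning choice are hypothetical illustrations, not details from the role.

```python
# Minimal sketch: ingest raw CSV from ADLS and persist as a Delta table.
# Assumes a Databricks runtime (or a Spark session with delta-spark configured).
# Paths, schema, and column names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

raw_path = "abfss://raw@examplelake.dfs.core.windows.net/orders/"          # hypothetical container
curated_path = "abfss://curated@examplelake.dfs.core.windows.net/orders/"  # hypothetical container

orders = (
    spark.read.option("header", True).csv(raw_path)
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])                    # basic data-quality step
    .withColumn("load_date", F.to_date("order_ts"))  # partition column
    .withColumn("ingested_at", F.current_timestamp())
)

# Overwriting by partition keeps reruns idempotent for a given load date.
(orders.write.format("delta")
       .mode("overwrite")
       .partitionBy("load_date")
       .save(curated_path))
```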
London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
AWS Data Engineer Rate: Up to £400 per day (Outside IR35) Contract Length: 6 Months (with strong potential for extension) Location: Hybrid - 1 day per week onsite in Central London A leading organisation is seeking an experienced AWS Data Engineer to join their data and analytics team, contributing to the design, development and optimisation of large-scale … data solutions within a modern cloud environment. This contract offers the opportunity to work on high-impact projects, delivering data platforms and pipelines that drive real-time insights and strategic business decisions. Responsibilities for the AWS Data Engineer: Design, build and maintain scalable data pipelines and architectures within the AWS ecosystem Leverage services such as AWS … Glue, Lambda, Redshift, EMR and S3 to support data ingestion, transformation and storage Work closely with data analysts, architects and business stakeholders to translate requirements into robust technical solutions Implement and optimise ETL/ELT processes, ensuring data integrity, consistency and quality across multiple sources Apply best practices in data modelling, version control, and CI …
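As an illustration of the AWS Glue work mentioned above, the following is a minimal sketch of a Glue PySpark job: read a table from the Glue Data Catalog, apply a simple column mapping, and write Parquet to S3. The catalog database, table, and bucket names are placeholders, not details from the listing.

```python
# Minimal sketch of an AWS Glue PySpark job. Catalog entries and S3 paths
# are hypothetical placeholders.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw dataset registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"     # hypothetical catalog entries
)

# Rename/cast columns on the way through.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[("order_id", "string", "order_id", "string"),
              ("amount", "string", "amount", "double")],
)

# Land curated Parquet in S3 for downstream Redshift/analytics use.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)
job.commit()
```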
London, South East, England, United Kingdom Hybrid/Remote Options
Harnham - Data & Analytics Recruitment
Contract Data Engineer £500/day - Outside IR35 6 Months (initial) Fully Remote A major hospitality operator is embarking on a large-scale data transformation project to build a Single Customer View (SCV) that will power personalised engagement across its portfolio of brands. This is a greenfield opportunity to shape the data foundation behind one of the … UK's biggest customer data platforms (CDP). You'll work alongside the Lead Data Engineer to design and implement scalable data pipelines, unify disparate data sources, and deliver clean, reliable datasets that drive smarter marketing and customer insights. THE ROLE: As a Data Engineer, you will: Design and build robust data pipelines to … consolidate customer data into a single, trusted view. Clean, transform, and enrich data for marketing activation and segmentation within the CDP. Collaborate with cross-functional teams to translate business requirements into technical solutions. Automate data ingestion, transformation, and testing processes aligned with DevOps best practices. Implement data quality frameworks and validation tests to ensure accuracy …
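To make the "single, trusted view" idea concrete, here is a minimal PySpark sketch of consolidating customer records from two hypothetical sources and keeping the most recently updated record per email address. Real SCV builds use much richer identity-resolution rules; source paths, column names, and the matching key are assumptions for illustration only.

```python
# Minimal sketch: union customer records from two placeholder sources and keep
# the latest record per (lower-cased) email — a simplified single customer view.
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("scv_sketch").getOrCreate()

bookings = spark.read.parquet("/data/bookings/customers/")   # hypothetical path
loyalty = spark.read.parquet("/data/loyalty/members/")       # hypothetical path

common_cols = ["customer_id", "email", "first_name", "last_name", "updated_at"]
unioned = bookings.select(common_cols).unionByName(loyalty.select(common_cols))

# Rank records within each email group, newest first.
latest_per_email = Window.partitionBy(F.lower("email")).orderBy(F.col("updated_at").desc())

scv = (unioned
       .withColumn("rn", F.row_number().over(latest_per_email))
       .filter("rn = 1")
       .drop("rn"))

scv.write.mode("overwrite").parquet("/data/scv/customers/")
```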
We are searching, on behalf of one of our key clients, for a Senior Data Platform Engineer to support and optimise large-scale data ecosystems across PostgreSQL, Snowflake and Greenplum. This is a high-impact role within a modern digital environment, ideal for someone who thrives on complex data challenges and enterprise-grade engineering. You'll be responsible for designing scalable … data models, optimising performance across multiple database technologies, and enabling seamless data ingestion pipelines. Expect to work in a cloud-driven setting with Azure, modern tooling, and critical data platforms that underpin major transformation programmes. Key skills required Deep knowledge of PostgreSQL, Snowflake and Greenplum Snowflake internals, schemas, modelling, data lakes and integration patterns Data ingestion using Informatica, Talend and similar ETL tooling Strong experience handling JSON, XML, CSV and multi-source datasets Patroni expertise for HADR and streaming replication Backup, recovery, tuning and optimisation across Postgres, Snowflake and Greenplum Understanding of Azure environments If you're ready to join a forward-thinking organisation delivering enterprise-level data solutions, please apply with an …
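For the Snowflake ingestion side of this role, a minimal sketch using the snowflake-connector-python client is shown below: load staged CSV files into a table with COPY INTO. The account details, warehouse, stage, and table names are placeholders; in practice this step would usually sit inside an orchestrated ETL tool such as Informatica or Talend.

```python
# Minimal sketch: run a COPY INTO load from an internal stage into a Snowflake
# table. Connection details, stage, and table names are placeholders.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH",        # hypothetical warehouse
    database="ANALYTICS",       # hypothetical database
    schema="STAGING",
)

cur = conn.cursor()
try:
    cur.execute("""
        COPY INTO STAGING.CUSTOMER_EVENTS
        FROM @LANDING_STAGE/customer_events/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    print(cur.fetchall())   # per-file load results returned by COPY INTO
finally:
    cur.close()
    conn.close()
```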
Data Platform Engineer DV Cleared £500 - £600 per day - Outside IR35 JOB DESCRIPTION This role requires strong expertise in building and managing data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. The successful candidate will design, implement, and maintain scalable, secure data solutions, ensuring compliance with strict security standards and regulations. This is a … UK-based onsite role with the option of compressed hours. The role will include: Design, develop, and maintain secure and scalable data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. Implement data ingestion, transformation, and integration processes, ensuring data quality and security. Collaborate with data architects and security teams to ensure compliance with security policies and data governance standards. Manage and monitor large-scale data flows in real-time, ensuring system performance, reliability, and data integrity. Develop robust data models to support analytics and reporting within secure environments. Perform troubleshooting, debugging, and performance tuning of data pipelines and the Elastic Stack. Build dashboards and visualizations in Kibana …
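As a small illustration of Elastic Stack ingestion, here is a sketch that bulk-indexes JSON log events into Elasticsearch using the official Python client's bulk helper. The endpoint, credentials, index name, and document fields are placeholders; in the environment described above, ingestion would normally flow through Logstash or NiFi pipelines rather than an ad-hoc script.

```python
# Minimal sketch: bulk-index log events into Elasticsearch with the official
# Python client. Host, credentials, index, and fields are placeholders.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("https://localhost:9200", api_key="...")   # hypothetical endpoint/credentials

def actions(events):
    for event in events:
        yield {
            "_index": "app-logs-000001",
            "_source": {
                "@timestamp": datetime.now(timezone.utc).isoformat(),
                "level": event.get("level", "INFO"),
                "message": event["message"],
            },
        }

sample = [{"level": "ERROR", "message": "upstream timeout"},
          {"message": "request served"}]

ok, errors = helpers.bulk(es, actions(sample), raise_on_error=False)
print(f"indexed={ok} errors={errors}")
```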
IR35) Due to the nature of the role, candidates must hold active DV clearance. Role details: Our client, a leading defence and security company, are looking for DV-cleared Data Engineers to join their team on a contract basis. This is a fully onsite role with the option of compressed hours. This role requires strong expertise in building and … managing data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. The successful candidate will design, implement, and maintain scalable, secure data solutions, ensuring compliance with strict security standards and regulations. Responsibilities not limited to: Design, develop, and maintain secure and scalable data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. Implement … data ingestion, transformation, and integration processes, ensuring data quality and security. Collaborate with data architects and security teams to ensure compliance with security policies and data governance standards. Manage and monitor large-scale data flows in real-time, ensuring system performance, reliability, and data integrity. Develop robust data models to support analytics …
Darlington, County Durham, North East, United Kingdom Hybrid/Remote Options
Inspire People
maintain and iterate a web API using FastAPI and hosted on Amazon Web Services (AWS). - Developing improvements to the existing product, following an existing roadmap focusing on improved data ingestion, UI updates (supported by Fullstack Developer) and potential agentic possibilities. - Helping design, develop and deliver a new capability focusing on turning existing audience data and insights … including Generative AI solutions around RAG and prompt engineering. - Contributing to the required documentation and Agile project maintenance responsibilities. - Helping to design and develop incoming features around sentiment analysis, data ingestion and synthetic audience creation. Essential Skills for the AI Engineer: - Must have experience in building Large Language Model-based applications incorporating tool usage and information retrieval. - Must …
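To ground the FastAPI part of this role, below is a minimal sketch of a single endpoint of the kind described. The route, request/response models, and the stubbed retrieval step are hypothetical placeholders, not the client's actual API; a real implementation would call out to a retrieval layer and an LLM where the comment indicates.

```python
# Minimal FastAPI sketch with one hypothetical insight endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="audience-insights-api")

class InsightRequest(BaseModel):
    audience_id: str
    question: str

class InsightResponse(BaseModel):
    audience_id: str
    answer: str

@app.post("/insights", response_model=InsightResponse)
def create_insight(req: InsightRequest) -> InsightResponse:
    # Placeholder for the RAG step: retrieve audience documents, build a
    # prompt, call the LLM. Here we just return a stub answer.
    answer = f"Stub insight for audience {req.audience_id}: {req.question}"
    return InsightResponse(audience_id=req.audience_id, answer=answer)

# Run locally with: uvicorn main:app --reload
```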
Migration Specialist to lead and support the migration of observability workloads from Splunk to Elasticsearch (ELK Stack). The ideal candidate will bring hands-on expertise in Splunk architecture, data ingestion, alerting, and dashboarding, along with experience migrating workloads to Elasticsearch. In addition to migration duties, the candidate will maintain and enhance existing Splunk infrastructure, provide incident support … and problem-solving skills. Key Responsibilities: Migration: Develop and implement a comprehensive migration strategy from Splunk to Elasticsearch (ELK Stack). Assess existing Splunk configurations (dashboards, alerts, saved searches, data models) and recreate them in Kibana. Collaborate with Elastic teams to configure alerting and monitoring using Kibana, Elasticsearch Watcher, or third-party tools. Ensure migration plans include validation, rollback … and Support: Identify and resolve issues in Splunk and ELK environments. Assist teams with Splunk-related queries and optimization efforts. Skills and Qualifications: Essential: Proven expertise with Splunk architecture, data ingestion, dashboarding, alerting, and administration. Experience migrating Splunk workloads to Elasticsearch (ELK Stack). Solid understanding of Kibana, Elasticsearch Watcher, and observability tooling. Proficiency in Linux/Unix …
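As one way of recreating a Splunk scheduled alert on the Elastic side, here is a sketch that registers an Elasticsearch Watcher watch ("error count over threshold in the last 5 minutes") via the REST API. The endpoint, credentials, watch id, index pattern, and threshold are placeholders, and Kibana alerting rules are an equally valid target for this kind of migration.

```python
# Minimal sketch: register a Watcher watch that mirrors a Splunk scheduled
# alert. All names, thresholds, and credentials are placeholders.
import requests

watch = {
    "trigger": {"schedule": {"interval": "5m"}},
    "input": {
        "search": {
            "request": {
                "indices": ["app-logs-*"],
                "body": {
                    "query": {
                        "bool": {
                            "filter": [
                                {"term": {"level": "ERROR"}},
                                {"range": {"@timestamp": {"gte": "now-5m"}}},
                            ]
                        }
                    }
                },
            }
        }
    },
    "condition": {"compare": {"ctx.payload.hits.total": {"gt": 10}}},
    "actions": {
        "log_it": {"logging": {"text": "More than 10 errors in the last 5 minutes"}}
    },
}

resp = requests.put(
    "https://localhost:9200/_watcher/watch/error-rate-alert",
    json=watch,
    auth=("elastic", "changeme"),   # hypothetical credentials
    verify=False,                   # placeholder; verify certificates in practice
)
resp.raise_for_status()
print(resp.json())
```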
Python Data Engineer Azure & PySpark - SC Cleared Contract £400-£458pd (Inside IR35) SC Clearance is Essential Summary We're looking for a Python Data Engineer skilled in PySpark, Delta Lake, Azure services, containerized development, and Behave-based testing. You'll design and build scalable data pipelines and maintain high-quality, test-driven code in a cloud environment. What you'll … do Build and maintain Python/PySpark pipelines for data ingestion, processing, and validation. Write unit and BDD tests using Behave, including mocking and patching. Create and optimize Delta Lake tables for reliable, performant data storage. Use Docker to manage consistent development, testing, and deployment environments. Build configurable, parameter-driven code for modular data solutions. Work … with Azure Functions, Key Vault, and Blob Storage for cloud-based workflows. Collaborate with architects, data scientists, and DevOps on CI/CD and deployment. Tune and troubleshoot Spark jobs in production. Document solutions and follow cloud security and governance best practices. Skills you need Strong Python skills with a focus on clean, test-driven code. Experience writing Behave …
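For the Delta Lake part of this role, a minimal sketch of an incremental upsert using the delta-spark MERGE API is shown below. The table paths, business key, and Spark configuration are assumptions for illustration; on Databricks the two session configs are already set.

```python
# Minimal sketch: incremental upsert (MERGE) into a Delta table keyed on a
# business identifier. Paths and column names are placeholders; assumes the
# delta-spark package is available to the Spark session.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("delta_upsert")
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

updates = spark.read.parquet("/mnt/landing/accounts/")            # new batch
target = DeltaTable.forPath(spark, "/mnt/curated/accounts_delta")  # existing table

# Update existing rows, insert new ones, in a single ACID operation.
(target.alias("t")
 .merge(updates.alias("s"), "t.account_id = s.account_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```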
London, South East, England, United Kingdom Hybrid/Remote Options
Robert Half
for containerised model deployments on AWS SageMaker, with GenAI/agentic capabilities embedded into the platform. You will also lead a team of four developers and collaborate closely with data engineering, data science and platform teams. Assignment Details Initial Duration: 6 months (strong likelihood of extension) Location: Hybrid/London Day Rate: £flexible PAYE Day Rate (plus 12.07 … Python Hands-on experience with GitLab CI/CD Extensive Docker containerisation & runtime experience Strong understanding of AWS SageMaker deployments Experience building developer environments for ML/quant/data-intensive workflows Knowledge of monitoring, logging, reporting and test automation Proven team leadership of small development teams Strong understanding of data ingestion, preprocessing and transformation Experience embedding … ranges are dependent upon your experience, qualifications and training. If you wish to apply, please read our Privacy Notice describing how we may process, disclose and store your personal data: gb/en/privacy-notice.
About the Role We are looking for a Python Data Engineer with strong hands-on experience in Behave-based unit testing, PySpark development, Delta Lake optimisation, and Azure cloud services. This role focusses on designing and deploying scalable data processing solutions in a containerised environment, emphasising maintainable, configurable, and test-driven code delivery. Key Responsibilities Develop and maintain data ingestion, transformation, and validation pipelines using Python and PySpark. Implement unit and behaviour-driven testing with Behave, ensuring robust mocking and patching of dependencies. Design and maintain Delta Lake tables for optimised query performance, ACID compliance, and incremental data loads. Build and manage containerised environments using Docker for consistent development, testing, and deployment. Develop configurable, parameter-driven … codebases to support modular and reusable data solutions. Integrate Azure services, including: Azure Functions for serverless transformation logic Azure Key Vault for secure credential management Azure Blob Storage for data lake operations What We're Looking For Proven experience in Python, PySpark, and Delta Lake. SC Cleared Strong knowledge of Behave for test-driven development. Experience with Docker …
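To illustrate the Behave-based testing with mocking and patching that this listing calls for, here is a minimal sketch of a feature scenario and its step definitions. The feature text, module path (`pipeline.ingest`), function names, and result attributes are hypothetical placeholders for whatever the real pipeline exposes.

```python
# Minimal Behave sketch: a step file that patches an Azure Blob client so the
# ingestion job under test makes no network calls. Module and function names
# are hypothetical.
#
# features/ingest.feature
#   Feature: Blob ingestion
#     Scenario: A landed file is ingested
#       Given a file "orders.csv" exists in the landing container
#       When the ingestion job runs
#       Then one batch is written to the curated zone

# features/steps/ingest_steps.py
from unittest.mock import MagicMock, patch
from behave import given, when, then

import pipeline.ingest as ingest            # hypothetical module under test


@given('a file "{name}" exists in the landing container')
def step_file_exists(context, name):
    context.blob_client = MagicMock()
    context.blob_client.list_blobs.return_value = [name]


@when("the ingestion job runs")
def step_run_job(context):
    # Patch the real blob client factory with the mock for the duration of the run.
    with patch("pipeline.ingest.get_blob_client", return_value=context.blob_client):
        context.result = ingest.run()


@then("one batch is written to the curated zone")
def step_batch_written(context):
    assert context.result.batches_written == 1
```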
Eastleigh, Hampshire, South East, United Kingdom Hybrid/Remote Options
Manpower
repair, and overhaul (MRO) sites Job Description Summary The Ground Application Software Engineer designs, builds, and sustains ground-based software applications and services that support mission-critical operations, including data ingestion, processing, and integration with back-end services and hardware systems. This role partners closely with product, systems, platform, data, and test teams to deliver high-quality … interfaces and operator tools that are reliable and performance-oriented. Integrate with existing software, simulators, test equipment, and external systems using well-defined contracts and protocols Implement and maintain data schemas, messaging, and event-driven integrations. Establish unit, integration, and end-to-end tests Experience in writing and developing test cases Experience in developing and maintaining automated testing software. … Apply secure coding practices, identity and access controls, and data protection. Participate in sprint planning, code reviews, and design reviews; contribute to documentation and user manuals Support deployments, environment configuration, and deployment activities. Work cross-functionally with project managers, systems engineers, and end users to refine requirements Drive root-cause analysis, retrospectives, and improvements that enhance Safety, Quality, Delivery …
application. The system is primarily built using Python, Django, PostgreSQL, and Celery for asynchronous task processing. The role involves working within an established multidisciplinary team to improve system stability, optimise data workflows, and contribute to a longer-term transition toward more modern, performant web technologies. Due to the nature of the role, active SC clearance is required. Key Responsibilities Maintain, extend … and refactor an existing Django/Python codebase. Work with PostgreSQL, including schema updates, data ingestion, performance tuning, and troubleshooting. Support and improve asynchronous processing pipelines using Celery. Contribute to technical decision-making around simplification, modularisation, and reducing architectural complexity. Ensure code quality through peer reviews, automated testing, and documentation. Actively participate in agile ceremonies and continuous …
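As a small illustration of the Celery asynchronous processing this listing describes, below is a sketch of a Django-flavoured Celery task and how it would be queued. The app, model, task name, and helper method are hypothetical, not details of the client's codebase.

```python
# Minimal sketch: a retryable Celery task for a Django project. Model and
# helper names are placeholders.
# tasks.py
from celery import shared_task


@shared_task(bind=True, max_retries=3, default_retry_delay=60)
def ingest_uploaded_file(self, upload_id: int) -> int:
    """Parse an uploaded file and load its rows into PostgreSQL."""
    from myapp.models import Upload          # hypothetical Django model
    upload = Upload.objects.get(pk=upload_id)
    rows_loaded = upload.parse_and_load()    # hypothetical helper on the model
    return rows_loaded


# Elsewhere (a view or management command), enqueue the work asynchronously:
#   ingest_uploaded_file.delay(upload.id)
```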