fraudulent registrations out of 1 million annually — and saving the UK taxpayer hundreds of millions in lost revenue. To enhance these capabilities, the team is migrating and automating data ingestion and risk analytics processes within the SAS Viya 4 platform, leveraging 20+ data sources from the Minerva Oracle estate. This new test role sits at the heart … of that transformation — ensuring that automated testing, data quality, and performance assurance are embedded into every stage of delivery. The Role: We're looking for a Network Test Specialist/Test Automation Engineer with strong experience in data and analytics platforms, ideally including SAS Viya. You'll join a newly formed agile team within the Minerva platform programme … working closely with developers, DevOps engineers, and analysts to build robust automated test frameworks supporting the ingestion, analytics, and Intelligent Decisioning capabilities of the platform. You'll own the automation test strategy, build frameworks from the ground up, integrate them into CI/CD pipelines, and ensure that every component — from data pipelines to user interfaces — is validated …
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Oliver James
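The data-quality assurance the test role above embeds into delivery can be illustrated with a minimal, self-contained sketch of a batch validation check. The field names (`id`, `registered_on`, `risk_score`) are hypothetical examples, not part of the Minerva schema:

```python
# Minimal sketch of an automated data-quality check of the kind a test
# framework might run against ingested records; field names are hypothetical.

def check_batch(records):
    """Return a list of human-readable failures for a batch of records."""
    failures = []
    seen_ids = set()
    for i, rec in enumerate(records):
        # Completeness: required fields must be present and non-empty.
        for field in ("id", "registered_on", "risk_score"):
            if rec.get(field) in (None, ""):
                failures.append(f"row {i}: missing {field}")
        # Uniqueness: the primary key must not repeat within the batch.
        if rec.get("id") in seen_ids:
            failures.append(f"row {i}: duplicate id {rec['id']}")
        seen_ids.add(rec.get("id"))
        # Validity: scores must fall in an agreed range.
        score = rec.get("risk_score")
        if isinstance(score, (int, float)) and not 0 <= score <= 1:
            failures.append(f"row {i}: risk_score out of range")
    return failures

batch = [
    {"id": "A1", "registered_on": "2024-01-05", "risk_score": 0.92},
    {"id": "A1", "registered_on": "2024-01-06", "risk_score": 1.7},
]
print(check_batch(batch))
```

Checks like these are easy to wrap in a pytest suite and run from a CI/CD pipeline after every ingestion step.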
AI/ML products who thrives in dynamic, fast-paced environments and enjoys working closely with small, high-impact teams. You will lead initiatives involving AI-driven system upgrades, data platform modernization, and greenfield builds, collaborating directly with director-level stakeholders to align technical innovation with business outcomes. Your work will help shape intelligent solutions that drive measurable value … through automation, personalization, and data insights. Key Responsibilities Own and manage the product backlog with a focus on AI/ML feature development, turning business needs into actionable user stories and clear acceptance criteria. Lead end-to-end initiatives involving AI-powered applications, from data ingestion and model deployment to user-facing functionality. Collaborate with data … customer, advocating for responsible AI adoption and transparency across the product lifecycle. Skills & Experience Proven experience as a Product Owner (or similar role) working on AI/ML or data-driven products in agile/scrum teams. Demonstrated success leading AI product development, including model integrations, intelligent workflows, or predictive analytics. Strong ability to translate complex AI concepts into …
of solutions Drive architectural and design decisions, ensuring scalable, resilient systems built on sound engineering principles and best practices Partner with clients to define and evolve their technology and data strategy, modernizing infrastructure, architecture, and technology stacks Utilize DevOps tools and practices to automate and streamline the build and deployment processes Work closely with Data Scientists and Engineers … to deliver robust, production-level AI and Machine Learning systems Develop frameworks and tools for efficient data ingestion from diverse and complex sources Operate in short, iterative sprints, delivering working software aligned with clear deliverables and client-defined deadlines Demonstrate flexibility by learning and working across multiple programming languages and technologies as required Additional Responsibilities: Actively contribute to … an added advantage Understanding of web APIs, contracts and communication protocols Understanding of Cloud platforms, infra-automation/DevOps, IaC/GitOps/Containers, design and development of large data platforms A maker's mindset - To be resourceful and have the ability to do things that have no instructions What will you experience in terms of culture at Sahaj …
London (City of London), South East England, United Kingdom
Sahaj Software
and implement model registries, versioning systems, and experiment tracking to ensure full reproducibility of all model releases. Deploy ML workflows using tools like Airflow or similar, managing dependencies from data ingestion through model deployment and serving. Instrument comprehensive monitoring for model performance, data drift, prediction quality, and system health. Manage infrastructure as code (Terraform, or similar) for … or similar model management systems. Practical knowledge of infrastructure as code, CI/CD best practices, and cloud platforms (AWS, GCP, or Azure). Experience with relational databases and data processing and query engines (Spark, Trino, or similar). Familiarity with monitoring, observability, and alerting systems for production ML (Prometheus, Grafana, Datadog, or equivalent). Understanding of ML concepts. … and reproducibility, but you also enable teams to ship fast. Nice to Have Experience delivering API services (FastAPI, SpringBoot or similar). Experience with message brokers and real-time data and event processing (Kafka, Pulsar, or similar). Why Join Us You'll be part of a small, high-output team where intensity and focus are the norm. You …
London (City of London), South East England, United Kingdom
algo1
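The data-drift monitoring this role mentions could look like the following stdlib-only sketch: a simple standardised mean-shift check on a feature sample. This is a toy illustration; production setups typically use PSI or KS tests from a stats library and feed the result into Prometheus/Grafana-style alerting:

```python
# Toy sketch of a data-drift signal a monitoring job might compute:
# compare a live feature sample against a training-time baseline using a
# standardised mean shift. Real systems usually use PSI or a KS test.
import statistics

def mean_shift_alert(baseline, live, threshold=3.0):
    """Flag drift when the live mean is > `threshold` standard errors away."""
    mu, sigma = statistics.fmean(baseline), statistics.stdev(baseline)
    stderr = sigma / (len(live) ** 0.5)
    z = abs(statistics.fmean(live) - mu) / stderr
    return z > threshold, round(z, 2)

baseline = [0.1 * i for i in range(100)]        # training-time distribution
steady = [0.1 * i for i in range(100)]          # no drift expected
shifted = [0.1 * i + 5.0 for i in range(100)]   # mean shifted by +5

print(mean_shift_alert(baseline, steady))   # low z-score, no alert
print(mean_shift_alert(baseline, shifted))  # large z-score, alert
```

In an Airflow-style deployment this check would run as a scheduled task downstream of ingestion, with the boolean driving an alert rather than a print.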
create strategic roadmaps for large enterprise initiatives Must have experience in Legacy Modernization programs Should be proficient at collaborating with cross-functional teams Strong background and experience in Data Ingestion, Transformation, Modeling and Performance tuning. Should have experience in designing and developing dashboards Strong knowledge of Hadoop, Kafka, SQL/NoSQL Should have experience in creating roadmaps to …
London (City of London), South East England, United Kingdom
HCLTech
building knowledge graphs Familiarity with the latest Generative AI developments such as LLM architectures, fine-tuning strategies, and agentic workflows Experience in observability tooling for distributed AI systems. Understanding of data ingestion and transformation pipelines supporting vector and knowledge graph stores. Proven ability to own feature delivery end-to-end. Strong front-end development expertise is essential, with proven …
ELK SME Extension Professional experience in the design, maintenance and management of Elastic stacks (Elasticsearch, Logstash, Kibana) Experience of configuring and maintaining large Elastic clusters Experience working with large data sets and elastic indexing best practices. Good understanding of Visualisation components and techniques in Elasticsearch. Proven experience in performance management and tuning of the Elasticsearch environment. Strong experience in writing data ingestion pipelines using Logstash and other big data technologies.
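One concrete indexing best practice behind roles like the above is batching writes through Elasticsearch's `_bulk` API, whose request body is newline-delimited JSON alternating an action line with a document line. The stdlib-only sketch below builds such a body; the index name is illustrative, and real pipelines would normally let Logstash or the client library's bulk helpers do this:

```python
# Sketch of building an Elasticsearch _bulk request body with the standard
# library. The bulk API expects NDJSON: an action/metadata line, then the
# document source line, ending with a trailing newline.
import json

def bulk_body(index, docs, chunk_size=500):
    """Yield NDJSON bodies of at most `chunk_size` documents each."""
    for start in range(0, len(docs), chunk_size):
        lines = []
        for doc in docs[start:start + chunk_size]:
            lines.append(json.dumps({"index": {"_index": index, "_id": doc["id"]}}))
            lines.append(json.dumps(doc))
        # A bulk body must end with a trailing newline.
        yield "\n".join(lines) + "\n"

docs = [{"id": str(n), "msg": f"event {n}"} for n in range(3)]
body = next(bulk_body("logs-2024", docs))
print(body)
```

Chunking keeps individual requests bounded, which is the usual tuning lever when bulk indexing into a large cluster.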
Edinburgh, Midlothian, Scotland, United Kingdom Hybrid / WFH Options
Atrium Workforce Solutions Ltd
ELK SME Extension Professional experience in the design, maintenance and management of Elastic stacks (Elasticsearch, Logstash, Kibana) Experience of configuring and maintaining large Elastic clusters Experience working with large data sets and elastic indexing best practices. Good understanding of Visualisation components and techniques in Elasticsearch. Proven experience in performance management and tuning of the Elasticsearch environment. Strong experience in writing data ingestion pipelines using Logstash and other big data technologies. Please feel free to contact me, Daisy Nguyen at Gibbs Consulting/Atrium UK, for a confidential chat about the role. Please also note: due to the volume of applications received, it will not be possible to respond to all applications …
XML, DITA, CMS). • Ensure seamless integration with engineering systems (PLM, ERP) and digital twin environments. Governance & Compliance • Establish architecture governance frameworks to ensure consistency, scalability, and compliance. • Define data models, metadata standards, and content lifecycle policies. • Ensure adherence to cybersecurity, regulatory, and quality standards in aerospace. Stakeholder Engagement & Leadership • Collaborate with engineering, product, IT, and documentation teams to … Familiarity with publishing engines such as XML Professional Publisher, FrameMaker Publishing, Oxygen Publishing, etc. • Familiarity with business workflow management tools – BREX, Schematron, Activiti, etc. • Working knowledge of logging & monitoring and Data Ingestion, Transformation and Analytics • Familiarity with IETP/IETM tools – Nivomax viewer, RWS LiveContent, CORENA IETP, Pinpoint, etc. • Extensive Systems Architecture experience • Effective communication, presentation, and interpersonal skills • Ability …
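As a small illustration of the metadata standards such a role would govern, the sketch below parses a DITA-like topic with Python's standard library and reports missing prolog metadata. The element names are simplified examples, not the full DITA schema:

```python
# Illustrative sketch of enforcing a metadata standard on structured content:
# parse a DITA-like XML topic and report required prolog elements that are
# absent. Element names are simplified examples of a content-lifecycle rule.
import xml.etree.ElementTree as ET

REQUIRED = ("author", "critdates")

def missing_metadata(topic_xml):
    """Return the required prolog elements missing from a topic."""
    root = ET.fromstring(topic_xml)
    prolog = root.find("prolog")
    if prolog is None:
        return list(REQUIRED)
    return [tag for tag in REQUIRED if prolog.find(tag) is None]

topic = """<topic id="t1">
  <title>Fuel system overview</title>
  <prolog><author>J. Doe</author></prolog>
  <body><p>...</p></body>
</topic>"""

print(missing_metadata(topic))  # critdates is absent
```

In practice such checks sit alongside BREX/Schematron rules in the CMS publishing workflow rather than in ad-hoc scripts.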
design, test and deploy AI projects. Azure AI/ML Engineer, key responsibilities: Build, develop and deploy AI applications using Python Design and develop AI services Set up and develop data ingestion pipelines and components Develop search-related components using Azure AI Search Develop and deploy AI/ML models Build and maintain scalable, high-performance AI apps on …
site. Key Requirements Professional experience in the design, maintenance and management of Elastic stacks (Elasticsearch, Logstash, Kibana) Experience of configuring and maintaining large Elastic clusters Experience working with large data sets and elastic indexing best practices. Good understanding of Visualisation components and techniques in Elasticsearch. Proven experience in performance management and tuning of the Elasticsearch environment. Strong experience in writing data ingestion pipelines using Logstash and other big data technologies. Are you interested in this position? If so, then please respond with your CV and I will be in touch ASAP.
I am working with a client in the education sector who is looking for a data engineer with experience across architecture and strategy to join on a part-time 12-month contract. 1-2 days per week. Fully remote. Outside IR35. Immediate start. Essential: Been to school in the UK. Data ingestion of APIs. GCP based (Google Cloud Platform). Snowflake …
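The "data ingestion of APIs" requirement above usually means cursor-paginated extraction. A hedged sketch of that pattern, with `fetch_page` standing in for a real HTTP call against the provider's endpoint:

```python
# Sketch of cursor-paginated API ingestion. `fetch_page` is a stub for an
# HTTP GET (urllib/requests against the source API); a real job would land
# each page in staging (e.g. GCS/Snowflake) before transformation.
def fetch_page(cursor):
    """Stub for an HTTP call; returns (records, next_cursor or None)."""
    data = {0: ([{"id": 1}, {"id": 2}], 1), 1: ([{"id": 3}], None)}
    return data[cursor]

def ingest_all(start_cursor=0):
    """Follow next-cursor links until the API signals the end."""
    records, cursor = [], start_cursor
    while cursor is not None:
        page, cursor = fetch_page(cursor)
        records.extend(page)  # in practice: write each page to staging here
    return records

print(ingest_all())
```

The loop terminates when the API returns no next cursor, which is the common contract for paginated REST endpoints.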
as of 12 months ending December 2024 totaled $13.8 billion. Experience: Minimum 10+ Years Strong knowledge of Hadoop, Kafka, SQL/NoSQL Specialization in designing and implementing large-scale data pipelines, ETL processes, and distributed systems Should be able to work independently with minimal help/guidance Good understanding of Airflow, Data Fusion and Data Flow Strong background and experience in Data Ingestion, Transformation, Modeling and Performance tuning. Migration experience from Cornerstone to GCP will be an added advantage Support the design and development of the Big Data ecosystem Experience in building complex SQL queries Strong communication skills
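As a flavour of the "complex SQL" such roles expect, the sketch below ranks events per key with a window function. It runs against an in-memory SQLite database purely for illustration; the real target in a GCP migration would be BigQuery:

```python
# Latest-event-per-user via ROW_NUMBER(), a common "complex SQL" pattern.
# Uses in-memory SQLite for a runnable example (window functions need
# SQLite >= 3.25, which modern Python builds bundle).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id TEXT, ts INTEGER, amount REAL)")
con.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("u1", 1, 10.0), ("u1", 2, 30.0), ("u2", 1, 5.0)],
)
rows = con.execute("""
    SELECT user_id, ts, amount FROM (
        SELECT *, ROW_NUMBER() OVER (
            PARTITION BY user_id ORDER BY ts DESC
        ) AS rn FROM events
    ) WHERE rn = 1
    ORDER BY user_id
""").fetchall()
print(rows)  # [('u1', 2, 30.0), ('u2', 1, 5.0)]
```

The same SELECT works essentially unchanged in BigQuery, which is why window functions are a standard interview topic for these pipelines.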