london (city of london), south east england, united kingdom
algo1
and implement model registries, versioning systems, and experiment tracking to ensure full reproducibility of all model releases. Deploy ML workflows using tools like Airflow or similar, managing dependencies from data ingestion through model deployment and serving. Instrument comprehensive monitoring for model performance, data drift, prediction quality, and system health. Manage infrastructure as code (Terraform, or similar) for … or similar model management systems. Practical knowledge of infrastructure as code, CI/CD best practices, and cloud platforms (AWS, GCP, or Azure). Experience with relational databases and data processing and query engines (Spark, Trino, or similar). Familiarity with monitoring, observability, and alerting systems for production ML (Prometheus, Grafana, Datadog, or equivalent). Understanding of ML concepts. … and reproducibility, but you also enable teams to ship fast. Nice to Have Experience delivering API services (FastAPI, Spring Boot or similar). Experience with message brokers and real-time data and event processing (Kafka, Pulsar, or similar). Why Join Us You'll be part of a small, high-output team where intensity and focus are the norm. You More ❯
and scale reliable, high-performing software in both private and public cloud environments, then the GCI and TRR teams are the perfect fit for you. Here, we focus on data ingestion, backup, and unified search & export for our archiving, e-discovery and compliance customers for different data types, as well as delivering best-in-class user reporting … code. Experience with performance/scalability testing of backend systems and APIs. Experience testing applications that interact with PostgreSQL or similar databases, including writing queries for validation and verifying data integrity. Experience testing applications running in Kubernetes environments. Familiarity with using monitoring and observability tools like Grafana to support test analysis and validation. Experience troubleshooting and supporting customers with … offer of employment will be subject to your successful completion of applicable background checks, conducted in accordance with local law. About Us We save companies the embarrassment of awkward data slip-ups by disrupting cybercriminal activity. We think fast, go big and always demand more. We work hard, deliver - and repeat. We grow with meaningful determination. And put success More ❯
Design, develop and deploy secure full stack applications Write clean, test-driven code (TDD/BDD) Integrate frontend components with APIs and backend systems Maintain and enhance core architecture (data ingestion, APIs, storage) Participate in Agile ceremonies (stand-ups, retros, sprint planning) Collaborate with researchers, designers, and technical teams Translate complex user needs into scalable software solutions Tech More ❯
frameworks like LangChain, LangGraph, and the Google Agent Development Kit (ADK). Develop and Evaluate RAG Pipelines: Engineer and optimize end-to-end Retrieval-Augmented Generation (RAG) systems, including data ingestion, chunking strategies, and implementing rigorous pipeline evaluation frameworks for accuracy and performance. Fine-Tune & Optimize LLMs: Implement advanced model customization techniques, including PEFT (Parameter-Efficient Fine-Tuning More ❯
london (city of london), south east england, united kingdom
HCLTech
create strategic roadmaps for large enterprise initiatives Must have experience in Legacy Modernization programs Should be proficient at collaborating with cross-functional teams Strong background and experience in data ingestion, transformation, modeling and performance tuning. Should have experience in designing and developing dashboards Strong knowledge of Hadoop, Kafka, SQL/NoSQL Should have experience in creating a roadmap to More ❯
building knowledge graphs Familiarity with the latest Generative AI developments such as LLM architectures, fine-tuning strategies, Agentic workflows Experience in observability tooling for distributed AI systems. Understanding of data ingestion and transformation pipelines supporting vector and knowledge graph stores. Proven ability to own feature delivery end-to-end. Strong front-end development expertise is essential, with proven More ❯
ELK SME Extension Professional experience in the design, maintenance and management of Elastic stacks (Elasticsearch, Logstash, Kibana) Experience of configuring and maintaining large Elastic clusters Experience working with large data sets and Elastic indexing best practices. Good understanding of visualisation components and techniques in Elasticsearch. Proven experience in performance management and tuning of Elasticsearch environments. Strong experience in writing data ingestion pipelines using Logstash and other big data technologies. More ❯
Edinburgh, Midlothian, Scotland, United Kingdom Hybrid / WFH Options
Atrium Workforce Solutions Ltd
ELK SME Extension Professional experience in the design, maintenance and management of Elastic stacks (Elasticsearch, Logstash, Kibana) Experience of configuring and maintaining large Elastic clusters Experience working with large data sets and Elastic indexing best practices. Good understanding of visualisation components and techniques in Elasticsearch. Proven experience in performance management and tuning of Elasticsearch environments. Strong experience in writing data ingestion pipelines using Logstash and other big data technologies. Please feel free to contact me, Daisy Nguyen at Gibbs Consulting/Atrium UK, for a confidential chat to learn more about the role. Please also note: due to the volume of applications received for positions, it will not be possible to respond to all applications More ❯
chain transaction flows. Role Overview The Senior Backend Engineer will contribute to the design, development, and optimization of core blockchain infrastructure systems, with a particular focus on Ethereum transaction data and Geth integration. The successful candidate will work closely with a globally distributed engineering team to deliver highly performant and reliable backend services supporting real-time transaction analytics. Responsibilities … Design and implement backend services for mempool and transaction data ingestion, processing, and visualization. Work on Geth integration and extend core Ethereum node functionality. Develop and maintain APIs and services enabling real-time transaction insight. Optimize system performance, reliability, and scalability for large-scale data environments. Collaborate with cross-functional teams to support research, product, and infrastructure … P2P networks, transaction propagation, and blockchain consensus mechanisms. Experience designing and maintaining distributed backend systems. Proficiency in Go, Rust, or similar system-level languages. Strong grasp of software architecture, data pipelines, and observability practices. Excellent communication skills and ability to work effectively in a remote, collaborative environment. What’s Offered Competitive base compensation and meaningful equity. Fully remote role More ❯
XML, DITA, CMS). • Ensure seamless integration with engineering systems (PLM, ERP) and digital twin environments. Governance & Compliance • Establish architecture governance frameworks to ensure consistency, scalability, and compliance. • Define data models, metadata standards, and content lifecycle policies. • Ensure adherence to cybersecurity, regulatory, and quality standards in aerospace. Stakeholder Engagement & Leadership • Collaborate with engineering, product, IT, and documentation teams to … Familiarity with publishing engines such as XML Professional Publisher, FrameMaker Publishing, Oxygen Publishing, etc. • Familiarity with business workflow management tools – BREX, Schematron, Activiti, etc. • Working knowledge of logging & monitoring and data ingestion, transformation, and analytics • Familiarity with IETP/IETM tools – Nivomax viewer, RWS LiveContent, CORENA IETP, Pinpoint, etc. • Extensive Systems Architecture experience • Effective communication, presentation, and interpersonal skills • Ability More ❯
design, test and deploy AI projects. Azure AI/ML Engineer, key responsibilities: Build, develop and deploy AI applications using Python Design and develop AI services Set up and develop data ingestion pipelines and components Develop search-related components using Azure AI Search Develop and deploy AI/ML models Build and maintain scalable, high-performance AI apps on More ❯
site. Key Requirements Professional experience in the design, maintenance and management of Elastic stacks (Elasticsearch, Logstash, Kibana) Experience of configuring and maintaining large Elastic clusters Experience working with large data sets and Elastic indexing best practices. Good understanding of visualisation components and techniques in Elasticsearch. Proven experience in performance management and tuning of Elasticsearch environments. Strong experience in writing data ingestion pipelines using Logstash and other big data technologies. Are you interested in this position? If so, then please respond with your CV and I will be in touch ASAP. More ❯
prominent organisation in the public health sector, is dedicated to fostering health security and responding effectively to public health emergencies. With a focus on pathogen modelling, genomic sequencing, and data analytics, they are at the forefront of critical initiatives shaping national health standards. Role Summary: Our client seeks an HPC Engineer with cluster experience to support critical public health … Slurm, Grid Engine, IBM) and tune MPI-based applications for genomic and health modelling tasks. Conduct security assessments and deploy compliant systems using SIEM tools (e.g., Splunk). Oversee data ingestion/backups for petabyte-scale health datasets and perform performance tests (e.g., Linpack). Respond to urgent outages during health crises and support researchers with documentation and More ❯
I am working with a client in the education sector who are looking for a data engineer with experience across architecture & strategy to join on a part-time 12-month contract. 1-2 days per week. Fully remote. Outside IR35. Immediate start. 12-month contract. Essential: Been to school in the UK; Data ingestion of APIs; GCP based (Google Cloud Platform); Snowflake More ❯