London (City of London), South East England, United Kingdom
Hadte Group
… quickly, and delays of even milliseconds can have big consequences.
Essential skills:
3+ years of experience in Python development.
3+ years with open-source real-time data feeds (Amazon Kinesis, Apache Kafka, Apache Pulsar or Redpanda).
Exposure to building and managing data pipelines in production.
Experience integrating serverless functions (AWS, Azure or GCP).
Passion for fintech and building products …
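For illustration, a minimal sketch of consuming the kind of real-time feed described above, using the kafka-python client against a Kafka-compatible broker (Kafka or Redpanda). The broker address, topic name, and message format are assumptions for the example, not details from the listing.

```python
# Minimal real-time feed consumer sketch using kafka-python.
# Broker, topic, and JSON payload shape are illustrative assumptions.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "ticks",                                  # hypothetical topic name
    bootstrap_servers="localhost:9092",       # hypothetical broker address
    group_id="demo-consumer",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    tick = message.value
    # In a real pipeline this is where per-message work would happen,
    # e.g. validation, enrichment, or forwarding to a serverless function.
    print(f"partition={message.partition} offset={message.offset} payload={tick}")
```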
London (City of London), South East England, United Kingdom
Infosys
Role – Technology Lead/Confluent Consulting Engineer
Technology – Apache Kafka, Confluent Platform, Stream Processing
Location – UK, Germany, Netherlands, France & Spain
Job Description
Today, the corporate landscape is dynamic and the world ahead is full of possibilities! None of the amazing things we do at Infosys would be possible without an equally amazing culture, the environment where ideas can flourish … data pipelines and integrations using Kafka and Confluent components. You will collaborate with data engineers, architects, and DevOps teams to deliver robust streaming solutions.
Required:
• Hands-on experience with Apache Kafka (any distribution: open-source, Confluent, Cloudera, AWS MSK, etc.)
• Strong proficiency in Java, Python, or Scala
• Solid understanding of event-driven architecture and data streaming patterns
• Experience deploying …
… ecosystem will be given preference:
• Experience with Kafka Connect, Kafka Streams, KSQL, Schema Registry, REST Proxy, Confluent Control Center
• Hands-on with Confluent Cloud services, including ksqlDB Cloud and Apache Flink
• Familiarity with Stream Governance, Data Lineage, Stream Catalog, Audit Logs, RBAC
• Confluent certifications (Developer, Administrator, or Flink Developer)
• Experience with Confluent Platform, Confluent Cloud managed services, multi-cloud …
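As a rough illustration of the Kafka-plus-Schema-Registry stack named above, here is a minimal Python sketch that produces an Avro-serialized record registered with Confluent Schema Registry, using the confluent-kafka client. The broker URL, registry URL, topic, and record schema are all assumptions made for the example.

```python
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import MessageField, SerializationContext

# Hypothetical record schema; a real engagement would manage schemas through
# Schema Registry subjects and compatibility rules.
SCHEMA = """
{
  "type": "record",
  "name": "PaymentEvent",
  "fields": [
    {"name": "payment_id", "type": "string"},
    {"name": "amount", "type": "double"}
  ]
}
"""

registry = SchemaRegistryClient({"url": "http://localhost:8081"})  # assumed registry URL
serialize_value = AvroSerializer(registry, SCHEMA)

producer = Producer({"bootstrap.servers": "localhost:9092"})       # assumed broker

event = {"payment_id": "p-123", "amount": 42.50}
producer.produce(
    topic="payments",                                               # assumed topic
    value=serialize_value(event, SerializationContext("payments", MessageField.VALUE)),
)
producer.flush()
```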
City of London, London, United Kingdom Hybrid / WFH Options
Futuria
… data integrity, consistency, and accuracy across systems. Optimize data infrastructure for performance, cost efficiency, and scalability in cloud environments. Develop and manage graph-based data systems (e.g. Kuzu, Neo4j, Apache AGE) to model and query complex relationships in support of Retrieval Augmented Generation (RAG) and agentic architectures. Contribute to text retrieval pipelines involving vector embeddings and knowledge graphs, for … workflows. Proficiency with cloud platforms such as Azure, AWS, or GCP and their managed data services.
Desirable:
Experience with asynchronous Python programming
Experience with graph technologies (e.g. Kuzu, Neo4j, Apache AGE)
Familiarity with embedding models (hosted or local): OpenAI, Cohere, etc., or HuggingFace models/sentence-transformers
Solid understanding of data modeling, warehousing, and performance optimization
Experience with … messaging middleware and streaming (e.g. NATS JetStream, Redis Streams, Apache Kafka or Pulsar)
Hands-on experience with data lakes, lakehouses, or components of the modern data stack
Exposure to MLOps tools and best practices
Exposure to workflow orchestration frameworks (e.g. Metaflow, Airflow, Dagster)
Exposure to Kubernetes
Experience working with unstructured data (e.g. logs, documents, images)
Awareness of …
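To make the graph-plus-embeddings work above concrete, here is a small sketch that embeds a document chunk with sentence-transformers and upserts it as a node in Neo4j via the official Python driver. The connection details, node labels, and property names are illustrative assumptions, not part of the listing.

```python
from neo4j import GraphDatabase
from sentence_transformers import SentenceTransformer

# Assumed local Neo4j instance and credentials; purely illustrative.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
model = SentenceTransformer("all-MiniLM-L6-v2")  # small, commonly used embedding model

def upsert_chunk(doc_id: str, chunk_id: str, text: str) -> None:
    """Store a text chunk and its embedding, linked to its source document."""
    embedding = model.encode(text).tolist()
    query = """
    MERGE (d:Document {id: $doc_id})
    MERGE (c:Chunk {id: $chunk_id})
    SET c.text = $text, c.embedding = $embedding
    MERGE (d)-[:HAS_CHUNK]->(c)
    """
    with driver.session() as session:
        session.run(query, doc_id=doc_id, chunk_id=chunk_id,
                    text=text, embedding=embedding)

upsert_chunk("doc-1", "doc-1-0", "Quarterly revenue grew 12% year on year.")
driver.close()
```

In a retrieval step, the stored embeddings could then be compared against a query embedding (for example via a vector index or in application code) before traversing the graph for related context, which is the general shape of the RAG workflows the listing mentions.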
London (City of London), South East England, United Kingdom
Infosys
Role – Technology Architect/Confluent Solution Architect
Technology – Apache Kafka, Confluent Platform, Stream Processing
Location – UK, Germany, Netherlands, France & Spain
Job Description
Today, the corporate landscape is dynamic and the world ahead is full of possibilities! None of the amazing things we do at Infosys would be possible without an equally amazing culture, the environment where ideas can flourish … these values are upheld only because of our people.
Your role
As a Confluent Solution Architect, you will lead the design and architecture of enterprise-grade streaming solutions using Apache Kafka and the Confluent Platform. You will work closely with clients to understand business requirements, define integration strategies, and guide implementation teams in building scalable, secure, and resilient data … streaming ecosystems.
Strongly Preferred:
• Experience in designing and architecting solutions using Apache Kafka, with hands-on experience in Confluent Kafka
• Ability to lead client engagements, translate business requirements into technical solutions, and guide implementation teams
• Deep understanding of Kafka internals, KRaft architecture, and Confluent components
• Experience with Confluent Cloud, Stream Governance, Data Lineage, and RBAC
• Expertise in stream processing …
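As a sketch of the basic stream-processing pattern such an architecture is built on, the loop below consumes from one Kafka topic, applies a transformation, and produces to another using the confluent-kafka Python client. The broker address, topic names, and business logic are assumptions for illustration; a production design would add schemas, error handling, and delivery guarantees.

```python
import json

from confluent_kafka import Consumer, Producer

BROKER = "localhost:9092"           # assumed broker address
consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "enrichment-demo",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": BROKER})

consumer.subscribe(["orders.raw"])  # assumed input topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        order = json.loads(msg.value())
        # Simple enrichment step standing in for real business logic.
        order["total_with_vat"] = round(order["total"] * 1.2, 2)
        producer.produce("orders.enriched", json.dumps(order).encode("utf-8"))
        producer.poll(0)            # serve delivery callbacks
finally:
    consumer.close()
    producer.flush()
```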
… contract assignment.
Key Requirements:
Proven background in AI and data development
Strong proficiency in Python, including data-focused libraries such as Pandas, NumPy, and PySpark
Hands-on experience with Apache Spark (PySpark preferred)
Solid understanding of data management and processing pipelines
Experience in algorithm development and graph data structures is advantageous
Active SC Clearance is mandatory
Role Overview: You … will play a key role in developing and delivering advanced AI solutions for a Government client. Responsibilities include:
Designing, building, and maintaining data processing pipelines using Apache Spark
Implementing ETL/ELT workflows for large-scale data sets
Developing and optimising Python-based data ingestion tools
Collaborating on the design and deployment of machine learning models
Ensuring data …
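A minimal PySpark sketch of the kind of batch pipeline described in the responsibilities above: read raw records, clean and aggregate them, and write the result out. File paths, column names, and the aggregation itself are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read a raw CSV dataset (path and schema are assumed for the example).
raw = spark.read.csv("data/events_raw.csv", header=True, inferSchema=True)

# Transform: drop incomplete rows and aggregate event counts per user per day.
daily_counts = (
    raw.dropna(subset=["user_id", "event_time"])
       .withColumn("event_date", F.to_date("event_time"))
       .groupBy("user_id", "event_date")
       .agg(F.count("*").alias("event_count"))
)

# Load: write the aggregate as partitioned Parquet for downstream consumers.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet("data/daily_counts")

spark.stop()
```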
City of London, London, United Kingdom Hybrid / WFH Options
Acquired Talent Ltd
Data Engineer/PostgreSQL/SQL/Data Pipelines/Apache Superset/PowerBI/Tableau/Terraform
Data Engineer (Outside IR35 Contract role)
Determination: Outside IR35
Day Rate: Up to £575 per day
Location: Hybrid Zone 1
Duration: 3 months (initial)
Job Title: Data Engineer
About the role: We're on the lookout for an experienced Data Engineer … for good space. You'll be involved in the full end-to-end process, building data pipelines and dashboards.
Requirements:
5+ years' experience with PostgreSQL, SQL & Terraform
Demonstrable experience with building data pipelines from scratch
3+ years' dashboarding/building dashboards (Apache …
… an experienced data engineer with experience building data pipelines, please apply or send your CV directly to callum@acquiredtalent.co.uk
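To illustrate the pipeline-into-PostgreSQL work this role centres on, here is a small Python sketch that loads a CSV extract into a Postgres table with pandas and SQLAlchemy, ready to be queried by a dashboarding tool such as Apache Superset. The connection string, file path, column names, and table name are assumptions for the example.

```python
import pandas as pd
from sqlalchemy import create_engine

# Assumed connection details; in practice these would come from config/secrets
# managed alongside the Terraform-provisioned infrastructure.
engine = create_engine("postgresql+psycopg2://analytics:secret@localhost:5432/warehouse")

# Extract and lightly transform a source file (path and columns are illustrative).
df = pd.read_csv("exports/donations.csv", parse_dates=["donated_at"])
df = df.dropna(subset=["donor_id", "amount"])
df["amount"] = df["amount"].round(2)

# Load into a table that a Superset/Power BI/Tableau dashboard can point at.
df.to_sql("donations_clean", engine, if_exists="append", index=False)
```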
London (City of London), South East England, United Kingdom
Capgemini
… Spark/Scala Developer to join our data engineering team. The ideal candidate will have hands-on experience in designing, developing, and maintaining large-scale data processing pipelines using Apache Spark and Scala. You will work closely with data scientists, analysts, and engineers to build efficient data solutions and enable data-driven decision-making.
Key Responsibilities:
Develop, optimize, and maintain data pipelines and ETL processes using Apache Spark and Scala.
Design scalable and robust data processing solutions for batch and real-time data.
Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
Perform data ingestion, transformation, and cleansing from various structured and unstructured sources.
Monitor and troubleshoot Spark jobs, ensuring high performance and …
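Since the responsibilities cover both batch and real-time processing, here is a rough Structured Streaming sketch of the real-time side. It is written in PySpark for brevity even though the role itself calls for Scala, and the Kafka broker, topic, and event schema are assumed for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

# Note: the Kafka source requires the spark-sql-kafka connector on the classpath.
spark = SparkSession.builder.appName("streaming-etl-sketch").getOrCreate()

# Assumed shape of incoming JSON events on the Kafka topic.
schema = StructType([
    StructField("symbol", StringType()),
    StructField("price", DoubleType()),
])

# Read a real-time feed from Kafka (broker and topic are illustrative).
raw = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")
         .option("subscribe", "prices")
         .load()
)

# Parse the JSON payload and keep only well-formed records.
parsed = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("event"))
       .select("event.*")
       .where(F.col("price").isNotNull())
)

# Write the cleansed stream out; a console sink keeps the sketch self-contained.
query = parsed.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```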