london (city of london), south east england, united kingdom
Infosys
Role – Technology Lead/Confluent Consulting Engineer
Technology – Apache Kafka, Confluent Platform, Stream Processing
Location – UK, Germany, Netherlands, France & Spain
Job Description
Today, the corporate landscape is dynamic and the world ahead is full of possibilities! None of the amazing things we do at Infosys would be possible without an equally amazing culture, the environment where ideas can flourish … data pipelines and integrations using Kafka and Confluent components. You will collaborate with data engineers, architects, and DevOps teams to deliver robust streaming solutions.
Required:
• Hands-on experience with Apache Kafka (any distribution: open-source, Confluent, Cloudera, AWS MSK, etc.)
• Strong proficiency in Java, Python, or Scala
• Solid understanding of event-driven architecture and data streaming patterns
• Experience deploying … ecosystem will be given preference:
• Experience with Kafka Connect, Kafka Streams, KSQL, Schema Registry, REST Proxy, Confluent Control Center
• Hands-on experience with Confluent Cloud services, including ksqlDB Cloud and Apache Flink
• Familiarity with Stream Governance, Data Lineage, Stream Catalog, Audit Logs, RBAC
• Confluent certifications (Developer, Administrator, or Flink Developer)
• Experience with Confluent Platform, Confluent Cloud managed services, multi-cloud …
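For readers gauging the hands-on bar a listing like this sets, here is a minimal sketch of producing and consuming events with the confluent-kafka Python client. The broker address, topic name, and consumer group are placeholders, not details from the listing.

```python
# Minimal produce/consume round trip with the confluent-kafka client.
# Broker address, topic, and group id are illustrative placeholders.
from confluent_kafka import Producer, Consumer

conf = {"bootstrap.servers": "localhost:9092"}  # assumed local broker

producer = Producer(conf)

def delivery_report(err, msg):
    # Delivery callbacks report per-message success or failure asynchronously.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()}[{msg.partition()}] @ {msg.offset()}")

producer.produce("orders", key="order-1", value=b'{"amount": 42}',
                 on_delivery=delivery_report)
producer.flush()  # block until all queued messages are delivered

consumer = Consumer({
    **conf,
    "group.id": "demo-consumer",      # consumer group used for offset tracking
    "auto.offset.reset": "earliest",  # start from the beginning if no committed offset
})
consumer.subscribe(["orders"])
try:
    msg = consumer.poll(5.0)  # wait up to 5 s for a message
    if msg is not None and msg.error() is None:
        print(f"Consumed: key={msg.key()}, value={msg.value()}")
finally:
    consumer.close()
```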
… optimizing scalable data solutions using the Databricks platform.
Key Responsibilities:
• Lead the migration of existing AWS-based data pipelines to Databricks.
• Design and implement scalable data engineering solutions using Apache Spark on Databricks.
• Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines.
• Optimize performance and cost-efficiency of Databricks workloads.
• Develop and maintain … best practices for data governance, security, and access control within Databricks.
• Provide technical mentorship and guidance to junior engineers.
Must-Have Skills:
• Strong hands-on experience with Databricks and Apache Spark (preferably PySpark).
• Proven track record of building and optimizing data pipelines in cloud environments.
• Experience with AWS services such as S3, Glue, Lambda, Step Functions, Athena, IAM …
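As a rough illustration of the migration work this listing describes, the following PySpark sketch reads an existing S3 dataset and rewrites it as a Delta table, a common first step when moving an AWS pipeline onto Databricks. Bucket names, paths, and column names are invented for the example.

```python
# Sketch: lift an S3-based batch step onto Databricks by landing it as a Delta table.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3-to-delta-migration").getOrCreate()

# Read the legacy dataset in place (e.g. the output of an old Glue job).
raw = spark.read.parquet("s3://example-legacy-bucket/events/")

# A representative transformation: type cleanup plus a derived date column.
cleaned = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write as Delta, partitioned so downstream queries can prune by date.
(cleaned.write.format("delta")
        .mode("overwrite")
        .partitionBy("event_date")
        .save("s3://example-lakehouse-bucket/silver/events/"))
```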
Edinburgh, Midlothian, United Kingdom Hybrid / WFH Options
Aberdeen Group
… API-driven architectures. Oversee data governance initiatives including metadata management, data quality, and master data management (MDM). Evaluate and integrate big data technologies and streaming platforms such as Apache Kafka and Apache Spark. Collaborate with cross-functional teams to align data architecture with business goals and technical requirements.
About the candidate
Exceptional stakeholder engagement, communication, and organisational …
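For context on what integrating "streaming platforms such as Apache Kafka and Apache Spark" can look like in practice, here is a hedged Spark Structured Streaming sketch that tails a Kafka topic. The broker address, topic, and checkpoint path are placeholders, and the job assumes the spark-sql-kafka connector matching your Spark version is on the classpath.

```python
# Sketch: consume a Kafka topic with Spark Structured Streaming.
# Requires the spark-sql-kafka-0-10 connector package; all settings are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

stream = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "transactions")
    .option("startingOffsets", "latest")
    .load())

# Kafka delivers key/value as binary; cast to strings before processing.
decoded = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    "timestamp",
)

query = (decoded.writeStream
    .format("console")  # stand-in sink for demonstration purposes
    .option("checkpointLocation", "/tmp/checkpoints/kafka-demo")
    .start())
query.awaitTermination()
```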
london, south east england, united kingdom Hybrid / WFH Options
Futuria
… data integrity, consistency, and accuracy across systems. Optimize data infrastructure for performance, cost efficiency, and scalability in cloud environments. Develop and manage graph-based data systems (e.g. Kuzu, Neo4j, Apache AGE) to model and query complex relationships in support of Retrieval-Augmented Generation (RAG) and agentic architectures. Contribute to text retrieval pipelines involving vector embeddings and knowledge graphs, for … workflows. Proficiency with cloud platforms such as Azure, AWS, or GCP and their managed data services.
Desirable:
• Experience with asynchronous Python programming
• Experience with graph technologies (e.g. Kuzu, Neo4j, Apache AGE)
• Familiarity with embedding models (hosted or local): OpenAI, Cohere, etc., or Hugging Face models/sentence-transformers
• Solid understanding of data modeling, warehousing, and performance optimization
• Experience with messaging middleware and streaming (e.g. NATS JetStream, Redis Streams, Apache Kafka, or Pulsar)
• Hands-on experience with data lakes, lakehouses, or components of the modern data stack
• Exposure to MLOps tools and best practices
• Exposure to workflow orchestration frameworks (e.g. Metaflow, Airflow, Dagster)
• Exposure to Kubernetes
• Experience working with unstructured data (e.g. logs, documents, images)
• Awareness of …
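As a small, assumption-laden sketch of the graph work mentioned in this listing, the snippet below upserts a document-mentions-entity relationship into Neo4j with the official Python driver, the kind of structure a knowledge-graph-backed RAG pipeline might later query. The URI, credentials, and schema are all hypothetical.

```python
# Sketch: upsert a (Document)-[:MENTIONS]->(Entity) edge in Neo4j.
# Connection details and the graph schema are invented for illustration.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

UPSERT = """
MERGE (d:Document {id: $doc_id})
MERGE (e:Entity {name: $entity})
MERGE (d)-[:MENTIONS]->(e)
"""

def link_entity(doc_id: str, entity: str) -> None:
    # MERGE makes the write idempotent: re-running it creates no duplicates.
    with driver.session() as session:
        session.run(UPSERT, doc_id=doc_id, entity=entity)

link_entity("doc-001", "Apache Kafka")
driver.close()
```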
… a key role in ensuring that all data systems comply with industry regulations and security standards while enabling efficient access for analytics and operational teams. A strong command of Apache NiFi is essential for this role. You will be expected to design, implement, and maintain data flows using NiFi, ensuring accurate, efficient, and secure data ingestion, transformation, and delivery. … business needs and compliance requirements. Maintain documentation of data flows and processes, ensuring knowledge sharing and operational transparency.
Skills & Experience: You will have the following skills or proven experience:
Apache NiFi Expertise:
• Deep understanding of core NiFi concepts: FlowFiles, Processors, Controller Services, Schedulers, Web UI
• Experience designing and optimizing data flows for batch, real-time streaming, and event-driven …
london (city of london), south east england, united kingdom
Infosys
Role – Technology Architect/Confluent Solution Architect
Technology – Apache Kafka, Confluent Platform, Stream Processing
Location – UK, Germany, Netherlands, France & Spain
Job Description
Today, the corporate landscape is dynamic and the world ahead is full of possibilities! None of the amazing things we do at Infosys would be possible without an equally amazing culture, the environment where ideas can flourish … these values are upheld only because of our people.
Your role
As a Confluent Solution Architect, you will lead the design and architecture of enterprise-grade streaming solutions using Apache Kafka and the Confluent Platform. You will work closely with clients to understand business requirements, define integration strategies, and guide implementation teams in building scalable, secure, and resilient data streaming ecosystems.
Strongly Preferred:
• Experience in designing and architecting solutions using Apache Kafka, with hands-on experience in Confluent Kafka
• Ability to lead client engagements, translate business requirements into technical solutions, and guide implementation teams
• Deep understanding of Kafka internals, KRaft architecture, and Confluent components
• Experience with Confluent Cloud, Stream Governance, Data Lineage, and RBAC
• Expertise in stream processing …
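To make the architecture-level Kafka skills above concrete, here is a hedged sketch of declaring a topic with explicit partitioning, replication, and retention via the confluent-kafka AdminClient. The broker address and settings are illustrative, not recommendations.

```python
# Sketch: create a topic with explicit capacity and retention settings,
# the kind of decision a streaming architect owns. All values are illustrative.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # assumed broker

topic = NewTopic(
    "payments",
    num_partitions=12,     # sets the ceiling on consumer parallelism
    replication_factor=3,  # survives the loss of up to two brokers
    config={"retention.ms": str(7 * 24 * 60 * 60 * 1000)},  # keep 7 days
)

# create_topics is asynchronous: it returns a dict of topic -> future.
futures = admin.create_topics([topic])
for name, future in futures.items():
    try:
        future.result()  # raises if creation failed (e.g. topic already exists)
        print(f"Created topic {name}")
    except Exception as exc:
        print(f"Failed to create {name}: {exc}")
```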
Cheltenham, England, United Kingdom Hybrid / WFH Options
Searchability NS&D
… active (West)
• Globally leading defence/cyber security company
• Up to £65k DoE – plus benefits and bonuses
• Cheltenham location – hybrid working model
• Experience required in Splunk/ELK, Linux, Apache NiFi, Java/Python, Docker/Kubernetes
Who Are We?
We are recruiting a Senior Support Engineer to work with a multi-national, industry-leading cyber security/defence … with tools like Splunk or the ELK stack. Strong ability to manage tasks proactively while adapting to shifting priorities. Proficiency in Linux server administration. Experience with technologies such as Apache NiFi, MinIO, and AWS S3. Skilled in managing and patching Java and Python applications. Familiarity with containerization tools like Docker or Podman and deployment platforms such as Kubernetes or … hearing from you.
SENIOR SUPPORT ENGINEER KEY SKILLS: SUPPORT ENGINEER/LINUX/UNIX/AWS/DOCKER/KUBERNETES/PYTHON/ANSIBLE/JAVA/ELK/APACHE/SPLUNK/APACHE NIFI/DV CLEARED/DV CLEARANCE/DEVELOPED VETTING/DEVELOPED VETTED/DEEP VETTING/DEEP VETTED/CHELTENHAM/SECURITY CLEARED …
In order to be successful, you will have the following experience:
• Extensive AI and data development background
• Experience with Python (including data libraries such as Pandas, NumPy, and PySpark) and Apache Spark (PySpark preferred)
• Strong experience with data management and processing pipelines
• Algorithm development and knowledge of graphs will be beneficial
• SC Clearance is essential
Within this role, you will be responsible for:
• Supporting the development and delivery of an AI solution to a Government customer
• Design, develop, and maintain data processing pipelines using Apache Spark
• Implement ETL/ELT workflows to extract, transform, and load large-scale datasets efficiently
• Develop and optimize Python-based applications for data ingestion
• Collaborate on development of machine learning models
• Ensure data quality, integrity …
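As a loose illustration of the ingestion and data-quality duties listed above, here is a small Pandas/NumPy sketch that loads a file, coerces types, and separates bad rows before they enter a pipeline. The file name and columns are made up for the example.

```python
# Sketch: ingest a CSV, enforce basic data-quality rules, and split out bad rows.
# File name and column names are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("readings.csv")

# Coerce types defensively: unparseable values become NaN/NaT instead of raising.
df["measured_at"] = pd.to_datetime(df["measured_at"], errors="coerce")
df["value"] = pd.to_numeric(df["value"], errors="coerce")

# Quality rule: a row is valid if it has a timestamp and a finite, positive value.
valid_mask = df["measured_at"].notna() & np.isfinite(df["value"]) & (df["value"] > 0)

clean, rejected = df[valid_mask], df[~valid_mask]
print(f"{len(clean)} clean rows, {len(rejected)} rejected rows")
# clean flows on to the pipeline; rejected rows would go to a quarantine table.
```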
london, south east england, united kingdom Hybrid / WFH Options
Acquired Talent Ltd
Data Engineer/PostgreSQL/SQL/Data Pipelines/Apache Superset/PowerBI/Tableau/Terraform
Data Engineer (Outside IR35 contract role)
Determination: Outside IR35
Day Rate: Up to £575 per day
Location: Hybrid, Zone 1
Duration: 3 months (initial)
Job Title: Data Engineer
About the role: We're on the lookout for an experienced Data Engineer … for good space. You'll be involved in the full end-to-end process, building data pipelines and dashboards.
Requirements:
• 5+ years' experience with PostgreSQL, SQL & Terraform
• Demonstrable experience with building data pipelines from scratch
• 3+ years' dashboarding/building dashboards (Apache … an experienced data engineer with experience building data pipelines, please apply or send your CV directly to callum@acquiredtalent.co.uk
Data Engineer/PostgreSQL/SQL/Data Pipelines/Apache Superset/PowerBI/Tableau/Terraform
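For a flavour of the pipeline work this role describes, here is a hedged psycopg2 sketch that loads a batch of transformed rows into PostgreSQL inside a single transaction. Connection details, the table, and its columns are placeholders.

```python
# Sketch: load a small batch of rows into PostgreSQL transactionally.
# Connection settings, table, and columns are invented for the example.
import psycopg2
from psycopg2.extras import execute_values

rows = [("2024-01-01", "signup", 120), ("2024-01-02", "signup", 98)]

conn = psycopg2.connect(host="localhost", dbname="analytics",
                        user="etl", password="secret")
try:
    with conn:  # commits on success, rolls back on error
        with conn.cursor() as cur:
            execute_values(
                cur,
                "INSERT INTO daily_metrics (day, metric, value) VALUES %s",
                rows,
            )
finally:
    conn.close()
```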
… Spark/Scala Developer to join our data engineering team. The ideal candidate will have hands-on experience in designing, developing, and maintaining large-scale data processing pipelines using Apache Spark and Scala. You will work closely with data scientists, analysts, and engineers to build efficient data solutions and enable data-driven decision-making.
Key Responsibilities:
• Develop, optimize, and maintain data pipelines and ETL processes using Apache Spark and Scala.
• Design scalable and robust data processing solutions for batch and real-time data.
• Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
• Perform data ingestion, transformation, and cleansing from various structured and unstructured sources.
• Monitor and troubleshoot Spark jobs, ensuring high performance and …
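This listing asks for Scala, but to keep this page's examples in a single language, here is a PySpark sketch of the kind of inspection used when monitoring and troubleshooting Spark jobs: checking partitioning, looking for key skew, and reading the physical plan. The dataset path and column names are placeholders.

```python
# Sketch: quick checks used when triaging a slow Spark job.
# Input path and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("job-triage").getOrCreate()
df = spark.read.parquet("s3://example-bucket/trades/")

# 1. Partition count: too few limits parallelism, too many adds scheduling overhead.
print("partitions:", df.rdd.getNumPartitions())

# 2. Skew check: rows per key, to spot one key dominating a shuffle.
df.groupBy("account_id").count().orderBy(F.desc("count")).show(5)

# 3. Physical plan: confirms whether filters are pushed down and which join
#    strategy Spark chose, before paying for an expensive re-run.
df.filter(F.col("amount") > 0).explain()
```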