Gloucester, Gloucestershire, South West, United Kingdom Hybrid / WFH Options
Searchability NS&D
hybrid working when possible Must hold active Enhanced DV Clearance (West) Competitive Salary DOE - 6% bonus, 25 days holiday, clearance bonus Experience in Data Pipelines, ETL processing, Data Integration, Apache, SQL/NoSQL Who Are We? Our client is a trusted and growing supplier to the National Security sector, delivering mission-critical solutions that help keep the nation safe … maintain optimal operation. The Data Engineer Should Have: Active eDV clearance (West) Willingness to work full-time on-site in Gloucester when required. Required technical experience in the following: Apache Kafka Apache NiFi SQL and NoSQL databases (e.g. MongoDB) ETL processing languages such as Groovy, Python or Java To be Considered: Please either apply by clicking online or … hearing from you. KEY SKILLS: DATA ENGINEER/DATA ENGINEERING/DEFENCE/NATIONAL SECURITY/DATA STRATEGY/DATA PIPELINES/DATA GOVERNANCE/SQL/NOSQL/APACHE/NIFI/KAFKA/ETL/GLOUCESTER/DV/SECURITY CLEARED/DV CLEARANCE
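As an illustration of the stack this role names, here is a minimal Python sketch of a Kafka-to-MongoDB pipeline; the topic, endpoints, and field handling are assumptions rather than details from the advert, and kafka-python/pymongo stand in for whatever libraries the client actually uses:

```python
import json

from kafka import KafkaConsumer    # pip install kafka-python
from pymongo import MongoClient    # pip install pymongo

# Hypothetical endpoints and topic -- replace with the real broker and cluster.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="etl-demo",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
collection = MongoClient("mongodb://localhost:27017")["etl_demo"]["events"]

for message in consumer:
    doc = message.value
    doc["_kafka_offset"] = message.offset     # keep provenance for replays/debugging
    collection.insert_one(doc)                # land each event in MongoDB
```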
location – full-time on-site when required Must hold active Enhanced DV Clearance (West) Circa £700 p/d inside IR35 Experience in Data Pipelines, ETL processing, Data Integration, Apache, SQL/NoSQL Who Are We? Our client is a trusted and growing supplier to the National Security sector, delivering mission-critical solutions that help keep the nation safe … maintain optimal operation. The Data Engineer Should Have: Active eDV clearance (West) Willingness to work full-time on-site in Gloucester when required. Required technical experience in the following: Apache Kafka Apache NiFi SQL and NoSQL databases (e.g. MongoDB) ETL processing languages such as Groovy, Python or Java Desirable skills: Java Docker Kubernetes Grafana/Prometheus Integration/… hearing from you. KEY SKILLS: DATA ENGINEER/DATA ENGINEERING/DEFENCE/NATIONAL SECURITY/DATA STRATEGY/DATA PIPELINES/DATA GOVERNANCE/SQL/NOSQL/APACHE/NIFI/KAFKA/ETL/GLOUCESTER/DV/SECURITY CLEARED/DV CLEARANCE
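This contract variant adds Grafana/Prometheus to the desirable list. A hedged sketch of how a Python pipeline worker could expose metrics for Prometheus to scrape (metric names, port, and the dummy workload are invented for illustration):

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server  # pip install prometheus-client

# Illustrative metric names -- a real pipeline would choose its own.
RECORDS = Counter("etl_records_processed_total", "Records processed by the ETL worker")
LATENCY = Histogram("etl_record_seconds", "Per-record processing time in seconds")

def process(record):
    time.sleep(random.uniform(0.001, 0.01))   # stand-in for real transform work

if __name__ == "__main__":
    start_http_server(8000)                   # Prometheus scrapes http://localhost:8000/metrics
    for i in range(100_000):
        with LATENCY.time():                  # times the wrapped block
            process(i)
        RECORDS.inc()
```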
London (City of London), South East England, United Kingdom
Hadte Group
quickly, and delays of even milliseconds can have big consequences. Essential skills: 3+ years of experience in Python development. 3+ years with open-source real-time data feeds (Amazon Kinesis, Apache Kafka, Apache Pulsar or Redpanda) Exposure to building and managing data pipelines in production. Experience integrating serverless functions (AWS, Azure or GCP). Passion for fintech and building products …
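This role combines managed real-time feeds with serverless functions. As a hedged sketch of that combination, an AWS Lambda handler wired to a Kinesis event source could look like the following; the payload shape and the `price` field are assumptions, though the base64-encoded `record["kinesis"]["data"]` envelope is how Lambda delivers Kinesis records:

```python
import base64
import json

def handler(event, context):
    """Entry point for a Lambda wired to a Kinesis event-source mapping."""
    processed = 0
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded inside the event envelope.
        payload = base64.b64decode(record["kinesis"]["data"])
        tick = json.loads(payload)              # assumes JSON market-data ticks
        if tick.get("price") is not None:       # hypothetical field check
            processed += 1                      # real code would forward or store it
    # Returning normally checkpoints the batch; raising causes a retry.
    return {"processed": processed}
```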
Responsibilities: Design and implement data lakehouse solutions on AWS using Medallion Architecture (Bronze/Silver/Gold layers). Build and optimize real-time and batch data pipelines leveraging Apache Spark, Kafka, and AWS Glue/EMR. Architect storage and processing layers using Parquet and Iceberg for schema evolution, partitioning, and performance optimization. Integrate AWS data services (S3, Redshift … guidance to engineering teams. Required Skills & Experience: Core Technical Expertise Strong hands-on skills in AWS Data Services (S3, Redshift, Glue, EMR, Kinesis, Lake Formation, DynamoDB). Expertise in Apache Kafka (event streaming) and Apache Spark (batch and streaming). Proficiency in Python for data engineering and automation. Strong knowledge of Parquet, Iceberg, and Medallion Architecture. Finance & Capital …
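To make the Medallion wording concrete, here is a hedged PySpark sketch of a bronze-to-silver hop: raw JSON is deduplicated, lightly conformed, and written as partitioned Parquet. Paths, the `trade_id` key, and column names are invented; the same frame could instead target an Iceberg table via Spark's `writeTo(...).using("iceberg")`:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Hypothetical lakehouse paths -- substitute real S3 locations.
bronze = spark.read.json("s3://lake/bronze/trades/")

silver = (bronze
    .dropDuplicates(["trade_id"])                         # assumed business key
    .withColumn("ingest_date", F.to_date("ingested_at"))  # derive the partition column
    .filter(F.col("trade_id").isNotNull()))               # basic quality gate

(silver.write
    .mode("overwrite")
    .partitionBy("ingest_date")     # enables partition pruning for gold-layer queries
    .parquet("s3://lake/silver/trades/"))
```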
variety of tool sets and data sources. Data Architecture experience with and understanding of data lakes, warehouses, and/or streaming platforms. Data Engineering experience with tooling, such as Apache Spark and Kafka, and orchestration tools like Apache Airflow or equivalent. Continuous Integration/Continuous Deployment experience with CI/CD tools like Jenkins or GitLab tailored for …
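Since the role names Apache Airflow for orchestration, a minimal DAG sketch in the Airflow 2.x style (task bodies, IDs, and the schedule are placeholders; `schedule=` is the 2.4+ spelling of the parameter):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")       # placeholder for a real extract step

def transform():
    print("clean and model")        # placeholder for a real transform step

with DAG(
    dag_id="example_etl",           # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",              # Airflow 2.4+ spelling; older 2.x uses schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # run transform only after extract succeeds
```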
London (City of London), South East England, United Kingdom
Infosys
Role – Technology Lead/Confluent Consulting Engineer Technology – Apache Kafka, Confluent Platform, Stream Processing Location – UK, Germany, Netherlands, France & Spain Job Description Today, the corporate landscape is dynamic and the world ahead is full of possibilities! None of the amazing things we do at Infosys would be possible without an equally amazing culture, the environment where ideas can flourish … data pipelines and integrations using Kafka and Confluent components. You will collaborate with data engineers, architects, and DevOps teams to deliver robust streaming solutions. Required: • Hands-on experience with Apache Kafka (any distribution: open-source, Confluent, Cloudera, AWS MSK, etc.) • Strong proficiency in Java, Python, or Scala • Solid understanding of event-driven architecture and data streaming patterns • Experience deploying … ecosystem will be given preference: • Experience with Kafka Connect, Kafka Streams, KSQL, Schema Registry, REST Proxy, Confluent Control Center • Hands-on with Confluent Cloud services, including ksqlDB Cloud and Apache Flink • Familiarity with Stream Governance, Data Lineage, Stream Catalog, Audit Logs, RBAC • Confluent certifications (Developer, Administrator, or Flink Developer) • Experience with Confluent Platform, Confluent Cloud managed services, multi-cloud …
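Because Schema Registry features prominently in the preference list, here is a hedged sketch using confluent-kafka-python's SerializingProducer with an Avro value serializer; the schema, topic, and endpoints are illustrative rather than anything from the role:

```python
from confluent_kafka import SerializingProducer                      # pip install confluent-kafka
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import StringSerializer

# Illustrative schema, topic, and endpoints -- not taken from the advert.
ORDER_SCHEMA = """
{"type": "record", "name": "Order", "fields": [
  {"name": "order_id", "type": "string"},
  {"name": "amount",   "type": "double"}
]}
"""

registry = SchemaRegistryClient({"url": "http://localhost:8081"})
producer = SerializingProducer({
    "bootstrap.servers": "localhost:9092",
    "key.serializer": StringSerializer("utf_8"),
    "value.serializer": AvroSerializer(registry, ORDER_SCHEMA),  # registers the schema on first use
})

producer.produce(topic="orders", key="o-1",
                 value={"order_id": "o-1", "amount": 42.0})
producer.flush()  # block until the broker acknowledges delivery
```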
data ingestion, Databricks for ETL, modelling data for Power BI, and working closely with stakeholders to create products that drive smarter decisions. Building and optimising data ingestion pipelines using Apache Spark (ideally in Azure Databricks) Collaborating across teams to understand requirements and deliver fit-for-purpose data products Supporting the productionisation of ML pipelines Working in an Agile/… services and DevOps (CI/CD) Knowledge of data modelling (Star Schema) and Power BI Bonus points for: Real-time data pipeline experience Azure Data Engineer certifications Familiarity with Apache Kafka or unstructured data (e.g. voice) Ready to shape the future of data at Reassured? If you're excited by the idea of building smart, scalable solutions that make …
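As a sketch of the ingestion side of this role, assuming Azure Databricks (where Auto Loader's `cloudFiles` source and the ambient `spark` session are available) and invented paths:

```python
# Databricks notebook sketch -- the `spark` session is provided by the runtime.
raw = (spark.readStream
       .format("cloudFiles")                                  # Databricks Auto Loader
       .option("cloudFiles.format", "json")
       .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/quotes")
       .load("/mnt/lake/landing/quotes/"))                    # hypothetical landing zone

(raw.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/lake/_checkpoints/quotes")
    .trigger(availableNow=True)                               # drain pending files, then stop
    .toTable("bronze.quotes"))                                # downstream Power BI models read from here
```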
optimizing scalable data solutions using the Databricks platform. Key Responsibilities: • Lead the migration of existing AWS-based data pipelines to Databricks. • Design and implement scalable data engineering solutions using Apache Spark on Databricks. • Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines. • Optimize performance and cost-efficiency of Databricks workloads. • Develop and maintain … best practices for data governance, security, and access control within Databricks. • Provide technical mentorship and guidance to junior engineers. Must-Have Skills: • Strong hands-on experience with Databricks and Apache Spark (preferably PySpark). • Proven track record of building and optimizing data pipelines in cloud environments. • Experience with AWS services such as S3, Glue, Lambda, Step Functions, Athena, IAM …
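To ground the migration bullet, a hedged sketch of one common hop: pointing Databricks at Parquet that an existing AWS Glue job writes to S3, registering it as a Delta table, and compacting it. Paths, names, and the `customer_id` clustering column are invented; `OPTIMIZE ... ZORDER BY` is Databricks/Delta-specific syntax:

```python
# Databricks notebook sketch -- the `spark` session comes from the runtime.
legacy = spark.read.parquet("s3://legacy-bucket/glue-output/orders/")   # hypothetical Glue output

(legacy.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("migrated.orders"))        # register as a Delta table in the metastore

# Compact small files and co-locate rows by a hot filter column to cut scan cost.
spark.sql("OPTIMIZE migrated.orders ZORDER BY (customer_id)")
```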
Edinburgh, Midlothian, United Kingdom Hybrid / WFH Options
Aberdeen Group
API-driven architectures. Oversee data governance initiatives including metadata management, data quality, and master data management (MDM). Evaluate and integrate big data technologies and streaming platforms such as Apache Kafka and Apache Spark. Collaborate with cross-functional teams to align data architecture with business goals and technical requirements. About the candidate Exceptional stakeholder engagement, communication, and organisational …
data modeling (star schema, snowflake schema). Version Control Practical experience with Git (branching, merging, pull requests). Preferred Qualifications (A Plus) Experience with a distributed computing framework like Apache Spark (using PySpark). Familiarity with cloud data services (AWS S3/Redshift, Azure Data Lake/Synapse, or Google BigQuery/Cloud Storage). Exposure to workflow orchestration … tools (Apache Airflow, Prefect, or Dagster). Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
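As an illustration of the star-schema requirement, a small PySpark sketch that splits a flat sales extract into one dimension and one fact table (the data and column names are made up):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("star-schema-demo").getOrCreate()

# Hypothetical flat extract: one row per sale, customer attributes repeated.
flat = spark.createDataFrame(
    [("s1", "c1", "Alice", "UK", 10.0),
     ("s2", "c1", "Alice", "UK", 5.0),
     ("s3", "c2", "Bob", "FR", 7.5)],
    ["sale_id", "customer_id", "customer_name", "country", "amount"],
)

# Dimension: one row per customer, with a surrogate key.
dim_customer = (flat.select("customer_id", "customer_name", "country")
                    .dropDuplicates(["customer_id"])
                    .withColumn("customer_sk", F.monotonically_increasing_id()))

# Fact: measures plus the foreign key into the dimension.
fact_sales = (flat.join(dim_customer.select("customer_id", "customer_sk"), "customer_id")
                  .select("sale_id", "customer_sk", "amount"))

fact_sales.show()
```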
streaming architectures, to support advanced analytics, AI, and business intelligence use cases. Proven experience in designing architectures for structured, semi-structured, and unstructured data, leveraging technologies like Databricks, Snowflake, Apache Kafka, and Delta Lake to enable seamless data processing and analytics. Hands-on experience in data integration, including designing and optimising data pipelines (batch and streaming) and integrating cloud … based platforms (e.g., Azure Synapse, AWS Redshift, Google BigQuery) with legacy systems, ensuring performance and scalability. Deep knowledge of ETL/ELT processes, leveraging tools like Apache Airflow, dbt, or Informatica, with a focus on ensuring data quality, lineage, and integrity across the data lifecycle. Practical expertise in data and AI governance, including implementing frameworks for data privacy, ethical …
to deliver secure, efficient, and maintainable software solutions. • Implement and manage cloud infrastructure using AWS services. • Automate deployment and infrastructure provisioning using Terraform or Ansible. • Optimize application performance using Apache Spark for data processing where required. • Write clean, efficient, and maintainable code following best coding practices. • Troubleshoot, debug, and resolve complex technical issues in production and development environments. • Work … RDS, etc.). • Proficiency in Terraform or Ansible for infrastructure automation. • Working knowledge of Angular or similar UI frameworks. • Solid understanding of SQL and relational database design. • Experience with Apache Spark for distributed data processing (preferred). • Strong problem-solving, analytical, and debugging skills. • Excellent communication and teamwork abilities. Nice to Have • Experience in CI/CD pipelines, Docker …