and understanding of secure development practices, including the OWASP Top 10 guidelines, SOC 2, and NCSC cloud security principles. Experience with data and orchestration tools, including some of dbt, Apache Airflow, and Azure Data Factory. Experience in programming languages, including some of Python, TypeScript, JavaScript, R, Java, and C#, producing services, APIs, Function Apps or Lambdas. The use of various standard …
Key Responsibilities: Design and implement real-time data pipelines using tools like Apache Kafka, Apache Flink, or Spark Streaming. Develop and maintain event schemas using Avro, Protobuf, or JSON Schema. Collaborate with backend teams to integrate event-driven microservices. Ensure data quality, lineage, and observability across streaming systems. Optimize performance and scalability of streaming applications. Implement CI …
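To make the event-schema responsibility above concrete, here is a minimal sketch of defining and validating an Avro event schema in Python with fastavro; the library choice, record fields and sample event are illustrative assumptions rather than details from the posting.

```python
# Hedged sketch: define an Avro event schema and validate a record against it.
# Field names and the sample event are invented for illustration.
from fastavro import parse_schema
from fastavro.validation import validate

order_event_schema = parse_schema({
    "type": "record",
    "name": "OrderCreated",
    "namespace": "events.orders",
    "fields": [
        {"name": "order_id", "type": "string"},
        {"name": "amount", "type": "double"},
        {"name": "currency", "type": "string", "default": "GBP"},
        {"name": "created_at", "type": "long"},  # epoch millis
    ],
})

event = {
    "order_id": "o-123",
    "amount": 42.5,
    "currency": "GBP",
    "created_at": 1710000000000,
}

# validate() raises on a non-conforming record and returns True otherwise
assert validate(event, order_event_schema)
```

The same parsed schema can then drive serialisation for Kafka producers and consumers, keeping the event contract in one place.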
Azure, AWS, GCP). Hands-on experience with SQL, data pipelines, data orchestration and integration tools. Experience in data platforms on-premises/cloud using technologies such as: Hadoop, Kafka, Apache Spark, Apache Flink, object, relational and NoSQL data stores. Hands-on experience with big data application development and cloud data warehousing (e.g. Hadoop, Spark, Redshift, Snowflake, GCP BigQuery … Expertise in building data architectures that support batch and streaming paradigms. Experience with standards such as JSON, XML, YAML, Avro, Parquet. Strong communication skills. Open to learning new technologies, methodologies, and skills. As the successful Data Engineering Manager you will be responsible for: Building and maintaining data pipelines. Identifying and patching issues and bugs identified in the pipeline …
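As a hedged illustration of an architecture that bridges the streaming and batch paradigms mentioned above, the following PySpark Structured Streaming sketch reads events from Kafka and lands them as Parquet; the broker, topic and paths are placeholders, and the Spark Kafka connector package is assumed to be on the classpath.

```python
# Hedged sketch: stream events from Kafka into Parquet files on object storage.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-parquet").getOrCreate()

# Read the raw event stream; Kafka delivers key/value as binary columns
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Cast the payload to string so downstream batch jobs can parse it
parsed = events.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

# Append Parquet files continuously; the checkpoint makes the job restartable
query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3a://data-lake/raw/orders/")
    .option("checkpointLocation", "s3a://data-lake/checkpoints/orders/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```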
Knowledge of Data Management technologies such as Relational and Columnar Databases, Data Integration (ETL), or API development. Familiarity with data formats like JSON, XML, and binary formats such as Avro or Google Protocol Buffers. Experience working with business and technical teams to develop Model Engineering solutions. Proficiency with tools like SQL, JavaScript, or Python for data analysis. Strong communication …
Engineers to guide teams in building and maintaining complex data pipelines. These pipelines ingest and transform data from diverse sources (e.g., email, CSV, ODBC/JDBC, JSON, XML, Excel, Avro, Parquet) using AWS technologies such as S3, Athena, Redshift, Glue, and programming languages like Python and Java (Docker/Spring). What You'll Do: Lead the design and …
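A possible shape for the multi-format ingestion described above, sketched with pandas: each source (CSV, JSON, Excel) is normalised into a single Parquet copy on S3 that Athena or Glue could then query. The file paths, bucket name and lineage column are invented for illustration, and s3fs, pyarrow and openpyxl are assumed to be installed.

```python
# Hedged sketch: normalise several source formats into one Parquet dataset on S3.
import pandas as pd

SOURCES = {
    "csv": ("landing/trades.csv", pd.read_csv),
    "json": ("landing/trades.json", pd.read_json),
    "excel": ("landing/trades.xlsx", pd.read_excel),
}

frames = []
for fmt, (path, reader) in SOURCES.items():
    df = reader(path)
    df["source_format"] = fmt  # keep a simple lineage marker per row
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)

# Write a single columnar copy that Athena/Glue can crawl and query
combined.to_parquet(
    "s3://example-data-lake/curated/trades/trades.parquet",
    engine="pyarrow",
    index=False,
)
```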
Leeds (2 days/week onsite) Duration: 6+ months. Job Summary: We are looking for an experienced Kafka Architect to design and implement scalable, high-throughput data streaming solutions using Apache Kafka on Cloudera (on-premises). The ideal candidate will have a strong background in distributed systems, data pipelines, and real-time data processing. Key Responsibilities: -Design and implement … needs. -Automate deployment and configuration using tools like Ansible, Terraform, or Cloudera Manager. -Provide technical leadership and mentorship to junior team members. Required Skills: -Strong hands-on experience with Apache Kafka (including Kafka Connect, Kafka Streams). -Experience with Cloudera distribution for Kafka in on-premises environments. -Proficiency in designing high-volume, low-latency data pipelines. -Solid knowledge of … Kafka internals – topics, partitions, consumer groups, offset management, etc. -Experience with data serialization formats like Avro, JSON, Protobuf. -Proficient in Java, Scala, or Python for Kafka-based development. -Familiarity with monitoring tools (Prometheus, Grafana, Confluent Control Center, etc.). -Understanding of networking, security (SSL/SASL), and data governance. -Experience with CI/CD pipelines and containerization (Docker, Kubernetes …
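To illustrate the consumer-group and offset-management concepts listed in the Kafka internals requirement, here is a small, assumption-laden sketch using kafka-python and fastavro; the broker address, topic, group id and schema are placeholders rather than details from the role.

```python
# Hedged sketch: consume Avro-encoded messages as part of a consumer group,
# committing offsets explicitly after each record is processed.
import io
from fastavro import parse_schema, schemaless_reader
from kafka import KafkaConsumer

schema = parse_schema({
    "type": "record",
    "name": "Trade",
    "fields": [
        {"name": "trade_id", "type": "string"},
        {"name": "price", "type": "double"},
    ],
})

consumer = KafkaConsumer(
    "trades",
    bootstrap_servers="broker:9092",
    group_id="risk-consumers",     # consumers in the same group share the partitions
    enable_auto_commit=False,      # commit offsets only after successful processing
    auto_offset_reset="earliest",  # where to start when no committed offset exists
)

for message in consumer:
    record = schemaless_reader(io.BytesIO(message.value), schema)
    print(message.partition, message.offset, record)
    consumer.commit()              # mark this offset as processed for the group
```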
Manchester, England, United Kingdom Hybrid / WFH Options
QinetiQ
are some things we've worked on recently that might give you a better sense of what you'll be doing day to day: Improving systems integration performance, using Apache NiFi, by balancing scaling and flow improvements through chunking. Implementing AWS Security Control Policies to manage global access privileges. Validating and converting data into a common data format … using Avro schemas and JOLT. Designing a proxy in front of our Kubernetes cluster egress to whitelist traffic and mitigate any security risks. Implementing Access/Role Based Access Control in ElasticSearch. Writing a React UI using an ATDD approach with Cypress. Right now, we are particularly looking for core skills: React, JavaScript, TypeScript, Kubernetes, AWS familiarity, automated testing …
Manchester, England, United Kingdom Hybrid / WFH Options
QinetiQ Limited
are some things we've worked on recently that might give you a better sense of what you'll be doing day to day: Improving systems integration performance, using Apache NiFi, by balancing scaling and flow improvements through chunking. Implementing AWS Security Control Policies to manage global access privileges. Validating and converting data into a common data format … using Avro schemas and JOLT. Designing a proxy in front of our Kubernetes cluster egress to whitelist traffic and mitigate any security risks. Implementing Access/Role Based Access Control in ElasticSearch. Writing a React UI using an ATDD approach with Cypress. Right now, we are particularly looking for: Familiarity with AWS, other automated testing frameworks (e.g. Playwright, Cucumber …
are some things Naimuri have worked on recently that might give you a better sense of what you'll be doing day to day: Improving systems integration performance, using Apache NiFi, by balancing scaling and flow improvements through chunking. Implementing AWS Security Control Policies to manage global access privileges. Validating and converting data into a common data format … using Avro schemas and JOLT. Designing a proxy in front of our Kubernetes cluster egress to allowlist traffic and mitigate any security risks. Implementing Access/Role Based Access Control in ElasticSearch. Writing React UI using an ATDD approach with Cypress. Improving docker-compose config and README instructions to improve Developer Experience. About you: We're looking for someone …
Portfolio managers. Provide level three support for OpenLink and processes developed by the group. Participate in capacity planning and performance/throughput analysis. Consume and publish transaction data in Avro over Kafka. Automation of system maintenance tasks, end-of-day processing jobs, data integrity checks and bulk data loads/extracts. Release planning and deployment. Build strong relationships with …
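A rough sketch of what publishing transaction data in Avro over Kafka might look like in Python, using kafka-python and fastavro; the schema, topic and broker address are illustrative assumptions, and the real trade model would come from the OpenLink integration rather than this example.

```python
# Hedged sketch: serialise a transaction with an Avro schema and publish it to Kafka.
import io
from fastavro import parse_schema, schemaless_writer
from kafka import KafkaProducer

transaction_schema = parse_schema({
    "type": "record",
    "name": "Transaction",
    "fields": [
        {"name": "deal_id", "type": "string"},
        {"name": "notional", "type": "double"},
        {"name": "currency", "type": "string"},
    ],
})

def encode(record: dict) -> bytes:
    """Serialise a record to Avro binary using the schema above."""
    buf = io.BytesIO()
    schemaless_writer(buf, transaction_schema, record)
    return buf.getvalue()

producer = KafkaProducer(bootstrap_servers="broker:9092", value_serializer=encode)
producer.send("transactions", {"deal_id": "D-1001", "notional": 5000000.0, "currency": "USD"})
producer.flush()  # block until the broker acknowledges the message
```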
Greater London, England, United Kingdom Hybrid / WFH Options
Quant Capital
big firm then don't worry. Additional exposure to the following is desired: Tech stack you will learn: Hadoop and Flink, Rust, JavaScript, React, Redux, Flow, Linux, Jenkins, Kafka, Avro, Kubernetes, Puppet. Involvement in the Java community. My client is based in London. Home working is encouraged but you will need to be able to come to the City if …
such as Relational and Columnar Databases, and/or Data Integration (ETL) or API development. Knowledge of some data formats such as JSON, XML, and binary formats such as Avro or Google Protocol Buffers. Experience collaborating with business and technical teams to understand, translate, review, and play back requirements, and to develop Model Engineering solutions. Experience communicating unambiguously through …
and/or distributed databases. Previous experience in monitoring, tracking and optimising cloud compute and storage costs. Experience working with RPC protocols and their formats, e.g., gRPC/protobuf, Apache Avro, etc. Experience with cloud-based (e.g. AWS, GCP, Azure) microservice, event-driven, and distributed architectures. Experience working in a fast-paced environment, collaborating across teams and disciplines.
and analytics on large-scale datasets. Implement and manage Lake Formation and AWS Security Lake, ensuring data governance, access control, and security compliance. Optimise file formats (e.g., Parquet, ORC, Avro) for S3 storage, ensuring efficient querying and cost-effectiveness. Automate infrastructure deployment using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation. Monitor and troubleshoot data workflows … Security Lake, and Lake Formation. Data Engineering – Proficiency in building and optimising data pipelines and working with large-scale datasets. File Formats & Storage – Hands-on experience with Parquet, ORC, Avro, and efficient S3 storage solutions. DevOps & Automation – Experience with Terraform, CloudFormation, or CDK to automate infrastructure deployment. Security & Compliance – Familiarity with AWS Security Lake, IAM policies, and access control …
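As a hedged example of the Parquet-on-S3 optimisation described above, the sketch below writes a small dataset partitioned by date with pyarrow so that Athena-style engines can prune files; the bucket name, columns and partition key are assumptions, and S3 access via pyarrow or s3fs is assumed to be configured.

```python
# Hedged sketch: write a date-partitioned Parquet dataset to S3 for efficient querying.
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

df = pd.DataFrame({
    "event_date": ["2024-05-01", "2024-05-01", "2024-05-02"],
    "user_id": [1, 2, 3],
    "amount": [9.99, 14.50, 3.25],
})

table = pa.Table.from_pandas(df)

# Partitioning by event_date keeps per-query scan volumes small;
# pyarrow applies snappy compression to Parquet files by default.
pq.write_to_dataset(
    table,
    root_path="s3://example-security-lake/curated/events/",
    partition_cols=["event_date"],
)
```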
engage with both technical and non-technical stakeholders. A strong grasp of Agile practices. Familiarity with NoSQL databases, big data technologies, and integrating diverse data formats (e.g., JSON, Parquet, Avro). Experience with data cataloguing, metadata management, and data lineage tools. Strong troubleshooting and problem-solving skills for complex data challenges. Desirable Experience & Knowledge: Background in building or deploying …