Solutions Architect to join our Customer Success team in EMEA. In this highly technical role, you will design, implement, and optimize real-time data streaming solutions, focusing specifically on Apache Flink and Ververica's Streaming Data Platform. You'll collaborate directly with customers and cross-functional teams, leveraging deep expertise in distributed systems, event-driven architectures, and cloud-native … implementation, architecture consulting, and performance optimization.

Key Responsibilities
- Analyze customer requirements and design scalable, reliable, and efficient stream-processing solutions
- Provide technical implementation support and hands-on expertise deploying Apache Flink and Ververica's platform in pre-sales and post-sales engagements
- Develop prototypes and proof-of-concept (PoC) implementations to validate and showcase solution feasibility and performance (see the sketch after this listing)
- Offer … technical reviews, and promote best practices in stream processing
- Deliver professional services engagements, including technical training sessions, workshops, and performance optimization consulting
- Act as a subject matter expert on Apache Flink, real-time stream processing, and distributed architectures
- Create and maintain high-quality technical documentation, reference architectures, best-practice guides, and whitepapers
- Stay informed on emerging streaming technologies, cloud …
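By way of illustration of the prototype and PoC work described above, here is a minimal PyFlink sketch of a streaming filter-and-alert job. It assumes a local apache-flink installation; the sample sensor events, threshold and job name are invented for the example, not taken from the advert.

```python
# Minimal PyFlink DataStream sketch: filter and transform a stream of events.
# Assumes `pip install apache-flink`; the sample records are made up for illustration.
from pyflink.datastream import StreamExecutionEnvironment

def main():
    env = StreamExecutionEnvironment.get_execution_environment()
    env.set_parallelism(1)

    # In a real PoC this would be a Kafka or file source; a collection keeps the sketch self-contained.
    events = env.from_collection([
        ("sensor-1", 21.5),
        ("sensor-2", 99.1),
        ("sensor-1", 22.0),
    ])

    (events
        .filter(lambda e: e[1] > 50.0)           # keep only out-of-range readings
        .map(lambda e: f"ALERT {e[0]}: {e[1]}")  # format a simple alert string
        .print())

    env.execute("poc-alerting-job")

if __name__ == "__main__":
    main()
```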
Azure, AWS, GCP)
- Hands-on experience with SQL, Data Pipelines, Data Orchestration and Integration Tools
- Experience in data platforms on premises/cloud using technologies such as: Hadoop, Kafka, Apache Spark, Apache Flink, object, relational and NoSQL data stores
- Hands-on experience with big data application development and cloud data warehousing (e.g. Hadoop, Spark, Redshift, Snowflake, GCP BigQuery …
- Expertise in building data architectures that support batch and streaming paradigms
- Experience with standards such as JSON, XML, YAML, Avro, Parquet
- Strong communication skills
- Open to learning new technologies, methodologies, and skills

As the successful Data Engineering Manager you will be responsible for:
- Building and maintaining data pipelines
- Identifying and patching issues and bugs in the pipeline/…
Load (ETL) data engineering
- Experience with REST APIs
- Experience with Java
- Experience with full lifecycle agile software development projects

Desired skills:
- Experience with Python
- Experience building data products in Apache Avro and/or Parquet
- On-the-job experience with Java software development
- Experience deploying the complete DevOps Lifecycle including integration of build pipelines, automated deployments, and compliance …
and data lake ecosystems e.g. Databricks, Snowflake etc. Well versed in designing data structures, event schemas and database schemas. Well versed in file formats such as CSV, JSON, Parquet, Avro and Iceberg. Solid and opinionated knowledge of testing methodologies. Solid and opinionated knowledge of coding principles and coding standards. Well versed with standard SDLC practices and tooling around build …
Knowledge of Data Management technologies such as Relational and Columnar Databases, Data Integration (ETL), or API development. Familiarity with data formats like JSON, XML, and binary formats such as Avro or Google Protocol Buffers. Experience working with business and technical teams to develop Model Engineering solutions. Proficiency with tools like SQL, JavaScript, or Python for data analysis. Strong communication …
such as Relational and Columnar Databases, and/or Data Integration (ETL) or API development. Knowledge of some Data Formats such as JSON, XML, and binary formats such as Avro or Google Protocol Buffers. Experience collaborating with business and technical teams to understand, translate, review, and play back requirements and collaborate to develop Model Engineering solutions. Exposure to working with …
Apache Kafka Engineer
My client is looking for a Senior Apache Kafka Engineer to lead the design, development, and management of our enterprise event streaming platform. This role requires deep Kafka expertise, strong system design skills, and hands-on experience managing large-scale, production-grade deployments.

Key Responsibilities
- Own and evolve a critical Kafka infrastructure: assess, stabilize, and … optimize architecture
- Design and implement scalable, event-driven systems across environments (dev, staging, prod)
- Develop and maintain Kafka clusters, topics, partitions, schemas (Avro), and connectors (see the producer sketch after this listing)
- Integrate Kafka with external systems and ensure reliability, security, and observability
- Troubleshoot delivery issues, latency, consumer lag, and performance bottlenecks
- Drive documentation, training, incident resolution, and continuous improvement

Qualifications
- 5+ years in software …
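To make the Avro-over-Kafka side of this role concrete, below is a minimal producer sketch using the confluent-kafka Python client with Schema Registry. The broker and registry URLs, topic name and Transaction schema are placeholders for illustration only, not details from the advert.

```python
# Minimal sketch: publish Avro-encoded records to a Kafka topic via Confluent Schema Registry.
# Assumes `pip install confluent-kafka[avro]`; URLs, topic and schema below are illustrative.
from confluent_kafka import SerializingProducer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import StringSerializer

SCHEMA_STR = """
{
  "type": "record",
  "name": "Transaction",
  "fields": [
    {"name": "id",     "type": "string"},
    {"name": "amount", "type": "double"}
  ]
}
"""

def main():
    registry = SchemaRegistryClient({"url": "http://localhost:8081"})
    value_serializer = AvroSerializer(registry, SCHEMA_STR)

    producer = SerializingProducer({
        "bootstrap.servers": "localhost:9092",
        "key.serializer": StringSerializer("utf_8"),
        "value.serializer": value_serializer,
    })

    # The serializer registers/validates the schema and encodes the dict as Avro.
    producer.produce(topic="transactions", key="tx-1",
                     value={"id": "tx-1", "amount": 42.0})
    producer.flush()

if __name__ == "__main__":
    main()
```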
Salford, England, United Kingdom Hybrid / WFH Options
Naimuri
are some things we’ve worked on recently that might give you a better sense of what you’ll be doing day to day:
- Improving systems integration performance, using Apache NiFi, by balancing scaling and flow improvements through chunking
- Implementing AWS Service Control Policies to manage global access privileges
- Validating and converting data into a common data format … using Avro Schemas and JOLT
- Designing a proxy in front of our Kubernetes cluster egress to whitelist traffic and mitigate any security risks
- Implementing Access/Role Based Access Control in ElasticSearch
- Writing a React UI using an ATDD approach with Cypress

Right now, we are particularly looking for:
- Familiarity with: AWS, Other automated testing frameworks (e.g. Playwright, Cucumber …
are some things Naimuri have worked on recently that might give you a better sense of what you'll be doing day to day:
- Improving systems integration performance, using Apache NiFi, by balancing scaling and flow improvements through chunking
- Implementing AWS Service Control Policies to manage global access privileges
- Validating and converting data into a common data format … using Avro Schemas and JOLT
- Designing a proxy in front of our Kubernetes cluster egress to allowlist traffic and mitigate any security risks
- Implementing Access/Role Based Access Control in ElasticSearch
- Writing a React UI using an ATDD approach with Cypress
- Improving docker-compose config and README instructions to improve Developer Experience

About you: We're looking for someone …
Manchester, England, United Kingdom Hybrid / WFH Options
QinetiQ
are some things we’ve worked on recently that might give you a better sense of what you’ll be doing day to day:
- Improving systems integration performance, using Apache NiFi, by balancing scaling and flow improvements through chunking
- Implementing AWS Service Control Policies to manage global access privileges
- Validating and converting data into a common data format … using Avro Schemas and JOLT (see the sketch after this listing)
- Designing a proxy in front of our Kubernetes cluster egress to whitelist traffic and mitigate any security risks
- Implementing Access/Role Based Access Control in ElasticSearch
- Writing a React UI using an ATDD approach with Cypress

Right now, we are particularly looking for:
- Core Skills: React, JavaScript, TypeScript, Kubernetes, AWS familiarity, Automated testing …
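The common-data-format step mentioned in the listings above typically means checking records against an Avro schema before or after the JOLT transform. Here is a minimal sketch using the fastavro library; the CommonEvent schema and the sample record are invented for the example.

```python
# Minimal sketch: validate a transformed record against an Avro schema with fastavro.
# Assumes `pip install fastavro`; the schema and record below are illustrative only.
from fastavro import parse_schema
from fastavro.validation import validate

COMMON_FORMAT_SCHEMA = parse_schema({
    "type": "record",
    "name": "CommonEvent",
    "fields": [
        {"name": "source",    "type": "string"},
        {"name": "timestamp", "type": "long"},
        {"name": "payload",   "type": ["null", "string"], "default": None},
    ],
})

def is_valid(record: dict) -> bool:
    # raise_errors=False returns a boolean instead of raising on the first failure.
    return validate(record, COMMON_FORMAT_SCHEMA, raise_errors=False)

if __name__ == "__main__":
    # e.g. the output of a JOLT transform applied to an upstream message
    candidate = {"source": "nifi-flow-3", "timestamp": 1700000000000, "payload": "{...}"}
    print(is_valid(candidate))  # True if it conforms to the common format
```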
Portfolio managers
- Provide level three support for OpenLink and processes developed by the group
- Participate in capacity planning and performance/throughput analysis
- Consuming and publishing transaction data in AVRO over Kafka (a consumer-side sketch follows below)
- Automation of system maintenance tasks, end-of-day processing jobs, data integrity checks and bulk data loads/extracts
- Release planning and deployment
- Build strong relationships with …
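As a counterpart to the producer sketch earlier, here is a minimal consumer-side sketch for Avro-encoded transaction messages, again using the confluent-kafka Python client. The URLs, topic and group id are placeholders rather than details from the role.

```python
# Minimal sketch: consume Avro-encoded transaction records from Kafka.
# Assumes `pip install confluent-kafka[avro]`; URLs, topic, group id are illustrative.
from confluent_kafka import DeserializingConsumer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroDeserializer

def main():
    registry = SchemaRegistryClient({"url": "http://localhost:8081"})
    # With no reader schema supplied, the writer schema registered for each message is used.
    value_deserializer = AvroDeserializer(registry)

    consumer = DeserializingConsumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "eod-processing",
        "auto.offset.reset": "earliest",
        "value.deserializer": value_deserializer,
    })
    consumer.subscribe(["transactions"])

    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None or msg.error():
                continue
            record = msg.value()  # a dict decoded from the Avro payload
            print(record)
    finally:
        consumer.close()

if __name__ == "__main__":
    main()
```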
a schema language to formally define all data at Bloomberg, complete with schema evolution, versioning, and true point-in-time semantics. We were the first team at Bloomberg to introduce Kafka, Avro, a company-wide Dataset Schema Registry, Mesos, Clustered MySQL, Vitess and Spark for ETL when designing this new data-intensive platform, which is the hub of financial datasets. …
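Schema evolution of the kind this team describes is handled in Avro through writer/reader schema resolution. The sketch below shows a backward-compatible change (a new field with a default) using fastavro; both schemas and the sample record are invented for illustration.

```python
# Minimal sketch of Avro schema evolution: data written with schema v1 is read with
# schema v2, which adds a field with a default. Schemas are illustrative only.
import io
from fastavro import parse_schema, schemaless_writer, schemaless_reader

V1 = parse_schema({
    "type": "record", "name": "Quote",
    "fields": [
        {"name": "ticker", "type": "string"},
        {"name": "price",  "type": "double"},
    ],
})

V2 = parse_schema({
    "type": "record", "name": "Quote",
    "fields": [
        {"name": "ticker",   "type": "string"},
        {"name": "price",    "type": "double"},
        {"name": "currency", "type": "string", "default": "USD"},  # new field, defaulted
    ],
})

# Write a record under the old schema...
buf = io.BytesIO()
schemaless_writer(buf, V1, {"ticker": "IBM", "price": 182.5})
buf.seek(0)

# ...and read it back under the new one; the missing field is filled from the default.
evolved = schemaless_reader(buf, writer_schema=V1, reader_schema=V2)
print(evolved)  # {'ticker': 'IBM', 'price': 182.5, 'currency': 'USD'}
```

Adding fields with defaults (or removing fields that had defaults) keeps old and new readers compatible, which is what makes registry-enforced compatibility modes practical.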
client is innovative and accountable.

Additional exposure to the following is desired (tech stack you will learn):
- Hadoop and Flink
- Rust, JavaScript, React, Redux, Flow
- Linux, Jenkins
- Kafka, Avro, Kubernetes, Puppet
- Involvement in the Java community

My client is based in London. …
and analytics on large-scale datasets.
- Implement and manage Lake Formation and AWS Security Lake, ensuring data governance, access control, and security compliance
- Optimise file formats (e.g., Parquet, ORC, Avro) for S3 storage, ensuring efficient querying and cost-effectiveness (a short sketch follows this listing)
- Automate infrastructure deployment using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation
- Monitor and troubleshoot data workflows
… Security Lake, and Lake Formation
- Data Engineering – Proficiency in building and optimising data pipelines and working with large-scale datasets
- File Formats & Storage – Hands-on experience with Parquet, ORC, Avro, and efficient S3 storage solutions
- DevOps & Automation – Experience with Terraform, CloudFormation, or CDK to automate infrastructure deployment
- Security & Compliance – Familiarity with AWS Security Lake, IAM policies, and access control …
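To make the file-format optimisation above concrete, here is a minimal sketch that writes a dataset to S3 as compressed, Hive-partitioned Parquet with pyarrow. The bucket, prefix and columns are placeholders, and AWS credentials are assumed to come from the environment.

```python
# Minimal sketch: write a dataset to S3 as snappy-compressed, Hive-partitioned Parquet,
# a common optimisation for Athena/Glue-style querying. Bucket and columns are illustrative.
# Assumes `pip install pyarrow` and AWS credentials available in the environment.
import pyarrow as pa
import pyarrow.dataset as ds

table = pa.table({
    "event_date": ["2024-01-01", "2024-01-01", "2024-01-02"],
    "user_id":    [1, 2, 3],
    "amount":     [9.99, 15.00, 7.50],
})

# Partitioning on event_date lets query engines prune whole directories;
# columnar Parquet plus snappy compression keeps scans small and cheap.
ds.write_dataset(
    table,
    base_dir="s3://example-data-lake/events",  # placeholder bucket/prefix
    format="parquet",
    partitioning=["event_date"],
    partitioning_flavor="hive",
    file_options=ds.ParquetFileFormat().make_write_options(compression="snappy"),
    existing_data_behavior="overwrite_or_ignore",
)
```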
such as Relational and Columnar Databases, and/or Data Integration (ETL) or API development. Knowledge of some Data Formats such as JSON, XML, and binary formats such as Avro or Google Protocol Buffers. Experience collaborating with business and technical teams to understand, translate, review, and play back requirements and collaborate to develop Model Engineering solutions. Experience communicating unambiguously through …
and/or distributed databases. Previous experience in monitoring, tracking and optimising cloud compute and storage costs. Experience working with RPC protocols and their formats, e.g., gRPC/protobuf, Apache Avro, etc. Experience with cloud-based (e.g. AWS, GCP, Azure) microservice, event-driven, and distributed architectures. Experience working in a fast-paced environment, collaborating across teams and disciplines. …
Greater London, England, United Kingdom Hybrid / WFH Options
Quant Capital
big firm then don’t worry. Additional exposure to the following is desired (tech stack you will learn):
- Hadoop and Flink
- Rust, JavaScript, React, Redux, Flow
- Linux, Jenkins
- Kafka, Avro, Kubernetes, Puppet
- Involvement in the Java community

My client is based in London. Home working is encouraged but you will need to be able to come to the City if …
and analytics on large-scale datasets.
- Implement and manage Lake Formation and AWS Security Lake, ensuring data governance, access control, and security compliance
- Optimise file formats (e.g., Parquet, ORC, Avro) for S3 storage, ensuring efficient querying and cost-effectiveness
- Automate infrastructure deployment using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation
- Monitor and troubleshoot data workflows …
Leeds, England, United Kingdom Hybrid / WFH Options
William Hill
a solution architect, technical business analyst or principal developer looking for your next adventure. You'll have a passion for solution design and know your REST from your AMQP, your JSON from your Avro, your TLS from your TTL and everything in between, and you'll have a well-rounded technical repertoire covering software design patterns, infrastructure design, business and process analysis, messaging …
and design workshops including estimating, scoping and delivering customer proposals aligned with Analytics Solutions. Experience with one or more relevant tools (Sqoop, Flume, Kafka, Oozie, Hue, Zookeeper, HCatalog, Solr, Avro, Parquet, Iceberg, Hudi). Experience developing software and data engineering code in one or more programming languages (Java, Python, PySpark, Node, etc). AWS and other Data and AI …
multiple heterogeneous data sources.
• Good knowledge of warehousing and ETLs. Extensive knowledge of popular database providers such as SQL Server, PostgreSQL, Teradata and others.
• Proficiency in technologies in the Apache Hadoop ecosystem, especially Hive, Impala and Ranger
• Experience working with open file and table formats such as Parquet, AVRO, ORC, Iceberg and Delta Lake
• Extensive knowledge of automation and …
Birmingham, England, United Kingdom Hybrid / WFH Options
QAD Inc
and pgx
- Uses database migration patterns, such as “expand and contract”, using go-migrate
- Writing observable and testable code using libraries such as testify and mockgen
- Publishing and consuming Avro-formatted Kafka messages
- CI/CD: GitHub Actions
- Trunk Based Development & Continuous Delivery
- Good collaboration skills at all levels with cross-functional teams
- Highly developed ownership and creative thinking …