Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Smart DCC
real-time data processing pipelines using platforms like Apache Kafka or cloud-native tools. Optimize batch processing workflows with tools like Apache Spark and Flink for scalable performance. Infrastructure Automation: Implement Infrastructure as Code (IaC) using tools like Terraform and Ansible. Leverage cloud-native services (AWS, Azure) to streamline More ❯
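Several of these listings pair Kafka ingestion with Spark or Flink for windowed processing. As a purely illustrative sketch (not any employer's actual stack), the tumbling-window count at the heart of such pipelines can be expressed in plain Python; the event shape and the 60-second window size are assumptions:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp, key) events into fixed, non-overlapping windows
    and count occurrences per key -- the basic operation a Flink tumbling
    window or Spark Structured Streaming aggregation performs."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs  # align to window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical events: (epoch seconds, event key)
events = [(5, "click"), (42, "click"), (61, "view"), (119, "click")]
result = tumbling_window_counts(events, window_secs=60)
print(result)  # {0: {'click': 2}, 60: {'view': 1, 'click': 1}}
```

Real engines add event-time watermarks, late-data handling, and distributed state, but the per-window grouping is the same idea.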
AI/ML components or interest in learning data workflows for ML applications. Bonus if you have exposure to Kafka, Spark, or Flink. Experience with data compliance regulations (GDPR). What you can expect from us: Opportunity for annual bonuses, Medical Insurance, Cycle to Work scheme More ❯
a track record of managing real-time data pipelines across multiple initiatives. Expertise in developing data backbones using distributed streaming platforms (Kafka, Spark Streaming, Flink, etc.). Experience working with cloud platforms such as AWS, GCP, or Azure for real-time data ingestion and storage. Programming skills in Python More ❯
a track record of managing real-time data pipelines across multiple initiatives. Expertise in developing data backbones using distributed streaming platforms (Kafka, Spark Streaming, Flink, etc.). Experience working with cloud platforms such as AWS, GCP, or Azure for real-time data ingestion and storage. Ability to optimise and More ❯
engineering experience - Experience with data modeling, warehousing, and building ETL pipelines - Bachelor's degree - Knowledge of batch and streaming data architectures like Kafka, Kinesis, Flink, Storm, Beam - Knowledge of distributed systems as it pertains to data storage and computing - Experience programming with at least one modern language such as More ❯
models for real-time analytics. Proven experience in managing real-time data pipelines across multiple initiatives. Expertise in distributed streaming platforms (Kafka, Spark Streaming, Flink). Experience with GCP (preferred), AWS, or Azure for real-time data ingestion and storage. Strong programming skills in Python, Java, or Scala. More ❯
and Redis. In-depth knowledge of ETL/ELT pipelines, data transformation, and storage optimization. Skilled in working with big data frameworks like Spark, Flink, and Druid. Hands-on experience with both bare metal and AWS environments. Strong programming skills in Python, Java, and other relevant languages. Proficiency in More ❯
advanced analytics infrastructure. Familiarity with infrastructure-as-code (IaC) tools such as Terraform or CloudFormation. Experience with modern data engineering technologies (e.g., Kafka, Spark, Flink, etc.). Why join YouLend? Award-Winning Workplace: YouLend has been recognised as one of the "Best Places to Work 2024" by the Sunday More ❯
development experience using SQL. Hands-on experience with MPP databases such as Redshift, BigQuery, or Snowflake, and modern transformation/query engines like Spark, Flink, Trino. Familiarity with workflow management tools (e.g., Airflow) and/or dbt for transformations. Comprehensive understanding of modern data platforms, including data governance and More ❯
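The transformation layer this role describes (dbt-style models over Redshift, BigQuery, or Snowflake) boils down to materialising cleaned tables from raw ones with SQL. A minimal stand-in using Python's built-in sqlite3 in place of a real warehouse; the table names, columns, and values are invented for illustration:

```python
import sqlite3

# In-memory stand-in for a warehouse; the schema is a made-up example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (id INTEGER, amount REAL, status TEXT);
    INSERT INTO raw_orders VALUES
        (1, 120.0, 'complete'),
        (2,  80.0, 'cancelled'),
        (3, 200.0, 'complete');
""")

# A dbt-style transformation: materialise a cleaned staging model
# from the raw table, filtering out cancelled orders.
conn.execute("""
    CREATE TABLE stg_orders AS
    SELECT id, amount
    FROM raw_orders
    WHERE status = 'complete'
""")

total = conn.execute("SELECT SUM(amount) FROM stg_orders").fetchone()[0]
print(total)  # 320.0
```

In dbt the `CREATE TABLE ... AS SELECT` would live in a model file and be scheduled by Airflow; the SQL itself is the same shape.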
/Nice to have Experience with a messaging middleware platform like Solace, Kafka or RabbitMQ. Experience with Snowflake and distributed processing technologies (e.g., Hadoop, Flink, Spark More ❯
and experience with DevOps practices (CI/CD). Familiarity with containerization (Docker, Kubernetes), RESTful APIs, microservices architecture, and big data technologies (Hadoop, Spark, Flink). Knowledge of NoSQL databases (MongoDB, Cassandra, DynamoDB), message queueing systems (Kafka, RabbitMQ), and version control systems (Git). Preferred Skills: Experience with natural More ❯
efficient integration into Feast feature store. Requirements Good knowledge of programming languages such as Python or Java. Strong experience with streaming technologies (Spark, PySpark, Flink, KSQL or similar) for developing data transformation pipelines. Solid understanding and practical experience with SQL and relational databases (PostgreSQL preferred). Proficiency with AWS More ❯
OAuth, JWT, and data encryption. Fluent in English with strong communication and collaboration skills. Preferred Qualifications Experience with big data processing frameworks like Apache Flink or Spark. Familiarity with machine learning models and AI-driven analytics. Understanding of front-end and mobile app interactions with backend services. Expertise in More ❯
London, South East England, United Kingdom Hybrid / WFH Options
eTeam
OAuth, JWT, and data encryption. Fluent in English with strong communication and collaboration skills. Preferred Qualifications Experience with big data processing frameworks like Apache Flink or Spark. Familiarity with machine learning models and AI-driven analytics. Understanding of front-end and mobile app interactions with backend services. Expertise in More ❯
with testing frameworks like Jest, Cypress, or React Testing Library. Experience with authentication strategies using OAuth, JWT, or Cognito Familiarity with Apache Spark/Flink for real-time data processing is an advantage. Hands-on experience with CI/CD tools Commercial awareness and knowledge of public sector. Excellent More ❯
experience in cross-functional teams and able to communicate effectively about technical and operational challenges. Preferred Qualifications: Proficiency with scalable data frameworks (Spark, Kafka, Flink) Proven Expertise with Infrastructure as Code and Cloud best practices Proficiency with monitoring and logging tools (e.g., Prometheus, Grafana) Working at Lila Sciences, you More ❯
developing and operating distributed systems or applications at large scale - Experience working on open table formats (Iceberg, Hudi, Delta) or query engines (Spark, Trino, Flink, etc.) is a huge plus - Experience contributing to open source code bases, and collaborating with open source communities Amazon is an equal opportunity employer More ❯
RabbitMQ, Pulsar, etc.). Experience in setting up data platforms and standards, not just pipelines. Experience with distributed data processing frameworks (e.g., Spark or Flink). About the Team J.P. Morgan is a global leader in financial services, providing strategic advice and products to the world's most prominent More ❯
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
Hargreaves Lansdown Asset Management Limited
volume data/low latency data pipeline with the following skills. Data Engineering Skills: Modelling; Orchestration using Apache Airflow; Cloud-native streaming pipelines using Flink, Beam, etc.; dbt; Snowflake. Infrastructure Skills: Terraform. DevOps Skills: Experienced in developing CI/CD pipelines. Integration Skills: REST and Graph APIs (Desirable); Serverless More ❯
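Orchestration with Airflow, as asked for above, amounts to declaring tasks and their dependencies as a DAG and running them in dependency order. A stdlib sketch of that scheduling idea (the task names are hypothetical; real Airflow adds operators, retries, and a scheduler):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline tasks mapped to their upstream dependencies,
# analogous to `extract >> transform >> load` wiring in an Airflow DAG.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load_snowflake": {"transform"},
    "dbt_run": {"load_snowflake"},
}

# static_order() yields tasks so every dependency runs before its dependents.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load_snowflake', 'dbt_run']
```

Airflow evaluates exactly this kind of topological ordering when deciding which task instances are ready to run.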
Cheltenham, Gloucestershire, South West, United Kingdom
Defence
and backend development within the defence and security sector Technical proficiency in: Spring Boot; Java Enterprise development; React/VueJS/AngularJS; Apache NiFi; Flink. Desired technical skills (at least 3 of the following): Ansible; Docker; Kubernetes; Grafana/Prometheus; Linux Sys Admin for deployed Clusters (10's of More ❯
entire spectrum of AWS Services - Storage (Redshift Data Shares, S3 data lakes), Orchestration (Step Functions, Glue, and Internal Java Based Orchestration Tools), Processing (Spark & Flink - KDA), Streaming services (AWS Kinesis) and real-time large scale event aggregation stores. Build and scale our ingestion pipeline for scale, speed, reliability, and More ❯
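The "real-time large scale event aggregation stores" mentioned here maintain running aggregates over a stream as records arrive. A toy in-memory version, with an invented record shape, shows the core idea; production systems shard this state across Kinesis/Flink workers:

```python
from collections import Counter

class EventAggregator:
    """Toy stand-in for a real-time event aggregation store fed by a
    stream (Kinesis/Kafka). The record field names are illustrative."""

    def __init__(self):
        self.counts = Counter()

    def ingest(self, record):
        # Update the running aggregate as each event arrives.
        self.counts[record["event_type"]] += 1

    def top(self, n=1):
        # Serve the current aggregate without rescanning history.
        return self.counts.most_common(n)

agg = EventAggregator()
for r in [{"event_type": "impression"}, {"event_type": "click"},
          {"event_type": "impression"}]:
    agg.ingest(r)
print(agg.top(1))  # [('impression', 2)]
```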
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Yelp USA
services that power features like Review Insights, Business Summaries, and Conversational Search. Build real-time and batch data processing pipelines using technologies like Kafka, Flink, and Spark. Ensure high availability, performance, and reliability of backend systems in a production environment. Contribute to the team's engineering best practices and More ❯
Leeds, West Yorkshire, Yorkshire and the Humber, United Kingdom
Pyramid Consulting, Inc
maintain smooth operations. Integration & Data Processing: Integrate Kafka with key data processing tools and platforms, including Kafka Streams, Kafka Connect, Apache Spark Streaming, Apache Flink, Apache Beam, and Schema Registry. This integration will facilitate data stream processing, event-driven architectures, and continuous data pipelines. Cross-functional Collaboration: Work More ❯
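Stripped of the specific platforms, the Kafka integration pattern described above is: validate each record against a registered schema, transform it, and emit it downstream. A minimal pure-Python sketch of one such stage (the schema fields and the uppercase transform are illustrative assumptions, not a real Schema Registry API):

```python
import json

# Fields every record must carry -- the role Schema Registry plays
# in a real deployment, here reduced to a required-field check.
SCHEMA_FIELDS = {"user_id", "action"}

def process(raw_messages):
    """Validate, transform, and collect records, as a Kafka Streams
    processor would between an input and an output topic."""
    out = []
    for raw in raw_messages:
        record = json.loads(raw)
        if not SCHEMA_FIELDS <= record.keys():
            continue  # drop records that fail schema validation
        record["action"] = record["action"].upper()  # the "transform" step
        out.append(record)
    return out

msgs = ['{"user_id": 1, "action": "login"}', '{"user_id": 2}']
print(process(msgs))  # [{'user_id': 1, 'action': 'LOGIN'}]
```

In Kafka Streams the validate/transform steps would be `filter` and `mapValues` operators and the output would go to another topic rather than a list.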