Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Smart DCC
real-time data processing pipelines using platforms like Apache Kafka or cloud-native tools. Optimize batch processing workflows with tools like Apache Spark and Flink for scalable performance. Infrastructure Automation: Implement Infrastructure as Code (IaC) using tools like Terraform and Ansible. Leverage cloud-native services (AWS, Azure) to streamline …
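The real-time pipeline work this role describes typically starts with a Kafka producer/consumer pair. Below is a minimal sketch, assuming a local broker at localhost:9092 and the kafka-python client; the meter-readings topic name and payload are illustrative, not taken from the listing:

```python
import json
from kafka import KafkaProducer, KafkaConsumer

# Producer: publish JSON events to an illustrative topic.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("meter-readings", {"meter_id": "m-001", "kwh": 1.42})
producer.flush()

# Consumer: read events back and process them as they arrive.
consumer = KafkaConsumer(
    "meter-readings",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)
```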
and Redis. In-depth knowledge of ETL/ELT pipelines, data transformation, and storage optimization. Skilled in working with big data frameworks like Spark, Flink, and Druid. Hands-on experience with both bare metal and AWS environments. Strong programming skills in Python, Java, and other relevant languages. Proficiency in …
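A minimal PySpark sketch of the ETL/ELT pattern this listing describes, assuming a local Spark installation; the bucket paths and column names are illustrative:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw events (path is illustrative).
raw = spark.read.json("s3a://raw-bucket/events/")

# Transform: drop bad rows and aggregate per day.
daily = (
    raw.filter(F.col("status") == "ok")
       .withColumn("day", F.to_date("timestamp"))
       .groupBy("day")
       .agg(F.count("*").alias("events"))
)

# Load: write partitioned Parquet for downstream queries.
daily.write.mode("overwrite").partitionBy("day").parquet("s3a://curated-bucket/daily/")
```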
with testing frameworks like Jest, Cypress, or React Testing Library. Experience with authentication strategies using OAuth, JWT, or Cognito. Familiarity with Apache Spark/Flink for real-time data processing is an advantage. Hands-on experience with CI/CD tools. Commercial awareness and knowledge of the public sector. Excellent …
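On the authentication point, backend JWT validation usually reduces to verifying the token's signature and claims. A minimal sketch using the PyJWT library, with a symmetric key for brevity; OAuth providers and Cognito typically use RSA keys fetched from a JWKS endpoint instead:

```python
import jwt  # PyJWT

SECRET = "change-me"  # illustrative; real deployments fetch keys from the provider

def verify_token(token: str) -> dict:
    """Return the token's claims if the signature and expiry are valid."""
    try:
        return jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.ExpiredSignatureError:
        raise PermissionError("token expired")
    except jwt.InvalidTokenError as exc:
        raise PermissionError(f"invalid token: {exc}")

# Round-trip example: issue a token, then verify it.
token = jwt.encode({"sub": "user-123"}, SECRET, algorithm="HS256")
print(verify_token(token))
```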
experience in cross-functional teams and able to communicate effectively about technical and operational challenges. Preferred Qualifications: Proficiency with scalable data frameworks (Spark, Kafka, Flink). Proven expertise with Infrastructure as Code and cloud best practices. Proficiency with monitoring and logging tools (e.g., Prometheus, Grafana). Working at Lila Sciences, you …
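For the monitoring qualification, Prometheus scrapes metrics that the service itself exposes over HTTP. A minimal sketch with the official prometheus_client library; the metric names and workload are illustrative:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Illustrative metrics a data pipeline might expose for Prometheus to scrape.
RECORDS = Counter("records_processed_total", "Records processed")
LATENCY = Histogram("record_processing_seconds", "Per-record processing time")

def process_record() -> None:
    with LATENCY.time():          # observe how long processing takes
        time.sleep(random.random() / 100)
    RECORDS.inc()                 # count each processed record

if __name__ == "__main__":
    start_http_server(8000)       # metrics served at http://localhost:8000/metrics
    while True:
        process_record()
```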
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
Hargreaves Lansdown
volume data/low-latency data pipeline with the following skills. Data Engineering Skills: Modelling; Orchestration using Apache Airflow (see the sketch below); Cloud-native streaming pipelines using Flink, Beam etc.; DBT; Snowflake. Infrastructure Skills: Terraform. DevOps Skills: Experienced in developing CI/CD pipelines. Integration Skills: REST and Graph APIs (Desirable); Serverless …
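A minimal sketch of the Airflow orchestration this listing asks for, assuming Airflow 2.4+; the DAG id and task bodies are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data")

def transform():
    print("run dbt / Spark transformations")

with DAG(
    dag_id="pipeline_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # run transform after extract completes
```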
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Yelp USA
services that power features like Review Insights, Business Summaries, and Conversational Search. Build real-time and batch data processing pipelines using technologies like Kafka, Flink, and Spark. Ensure high availability, performance, and reliability of backend systems in a production environment. Contribute to the team's engineering best practices and …
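A minimal PyFlink sketch of the kind of real-time processing described, assuming a local Flink setup; the review data and transformation are illustrative, and a production job would read from Kafka rather than an in-memory collection:

```python
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

# Illustrative bounded source; real jobs would use a Kafka source instead.
reviews = env.from_collection([
    ("business-1", 5),
    ("business-2", 3),
    ("business-1", 4),
])

# Keep only high ratings and tag them.
(reviews
    .filter(lambda r: r[1] >= 4)
    .map(lambda r: f"{r[0]}: positive review ({r[1]} stars)")
    .print())

env.execute("review-stream-sketch")
```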
and managing tech partnerships. Nice to have: AWS Solution Architect certification. Docker, Kubernetes, or container orchestration tools. Observability tools (e.g. New Relic) experience. Kafka, Flink, or IoT streaming tech exposure. Background in financial services or regulated environments. What's in it for you? Competitive salary: Up to …
language such as Java or Python; web-development skills and middleware like Spring Boot, databases, caches; and a big data tech stack such as Hadoop/Spark/Flink/Kafka. Clean coding habits, attention to detail, focus on quality and best practice. Good oral and written communication in English. Experience contributing …
ensure high availability and scalability. Integrating Kafka with other data processing tools and platforms such as Kafka Streams, Kafka Connect, Spark Streaming, Schema Registry, Flink and Beam. Collaborating with cross-functional teams to understand data requirements and design solutions that meet business needs. Implementing security measures to protect Kafka … Extensive experience with Apache Kafka and real-time architecture including event-driven frameworks. Strong knowledge of Kafka Streams, Kafka Connect, Spark Streaming, Schema Registry, Flink and Beam. Experience with cloud platforms such as GCP Pub/Sub. Excellent problem-solving skills. Knowledge & Experience/Qualifications: Knowledge of Kafka data …
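Kafka Connect connectors like those mentioned here are usually registered through the Connect worker's REST API. A minimal sketch, assuming a worker at localhost:8083 and using the stock FileStreamSource connector; the connector name, file path, and topic are illustrative:

```python
import requests

# Illustrative connector config: tail a file into a Kafka topic.
connector = {
    "name": "file-source-sketch",
    "config": {
        "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
        "tasks.max": "1",
        "file": "/tmp/input.txt",
        "topic": "file-lines",
    },
}

# Register the connector with the Connect worker's REST API.
resp = requests.post("http://localhost:8083/connectors", json=connector, timeout=10)
resp.raise_for_status()
print(resp.json())
```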