diverse sources, transform it into usable formats, and load it into data warehouses, data lakes, or lakehouses. Big Data Technologies: Utilize big data technologies such as Spark, Kafka, and Flink for distributed data processing and analytics. Cloud Platforms: Deploy and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), leveraging cloud-native services …
with a focus on data quality and reliability. Design and manage data storage solutions, including databases, warehouses, and lakes. Leverage cloud-native services and distributed processing tools (e.g., Apache Flink, AWS Batch) to support large-scale data workloads. Operations & Tooling Monitor, troubleshoot, and optimize data pipelines to ensure performance and cost efficiency. Implement data governance, access controls, and security … pipelines and data architectures. Hands-on expertise with cloud platforms (e.g., AWS) and cloud-native data services. Comfortable with big data tools and distributed processing frameworks such as Apache Flink or AWS Batch. Strong understanding of data governance, security, and best practices for data quality. Effective communicator with the ability to work across technical and non-technical teams. Additional … following prior to applying to GSR? Experience level, applicable to this role? How many years have you designed, built, and operated stateful, exactly-once streaming pipelines in Apache Flink (or an equivalent framework such as Spark Structured Streaming or Kafka Streams)? Which statement best describes your hands-on responsibility for architecting and tuning cloud-native data lake …
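The "stateful, exactly-once streaming pipelines" requirement above can be illustrated with a minimal sketch. This is plain Python mimicking the checkpointing idea Flink uses (snapshot operator state atomically with the input offset, so a restart after failure neither loses nor double-counts records); the class and method names are illustrative, not PyFlink APIs.

```python
# Sketch of checkpoint-based exactly-once processing: operator state
# (running counts) is snapshotted together with the input offset, so
# recovery replays from the checkpoint without double-counting.
import copy

class CheckpointedCounter:
    def __init__(self):
        self.offset = 0           # next input position to read
        self.counts = {}          # operator state: key -> count
        self._checkpoint = (0, {})

    def process(self, stream):
        # consume everything not yet processed
        for record in stream[self.offset:]:
            self.counts[record] = self.counts.get(record, 0) + 1
            self.offset += 1

    def checkpoint(self):
        # snapshot state and offset together (atomic in a real system)
        self._checkpoint = (self.offset, copy.deepcopy(self.counts))

    def recover(self):
        # on failure, roll back to the last consistent snapshot
        self.offset, state = self._checkpoint
        self.counts = copy.deepcopy(state)

stream = ["a", "b", "a", "c", "a"]
op = CheckpointedCounter()
op.process(stream[:3])   # processed a, b, a
op.checkpoint()
op.process(stream)       # processed c, a
op.recover()             # simulate a crash: roll back past c, a
op.process(stream)       # replay from offset 3: c, a counted exactly once
# op.counts == {"a": 3, "b": 1, "c": 1}
```

The key point is that state and offset are restored as one unit; restoring only one of them would give at-least-once or at-most-once semantics instead.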
Head of Data & Analytics Architecture and AI. Location: Chiswick Park. Time type: Full time. Posted: 30+ Days Ago. Job requisition id: JR19765. Want to help us bring …
Experience working in environments with AI/ML components, or interest in learning data workflows for ML applications. Bonus if you have exposure to Kafka, Spark, or Flink. Experience with data compliance regulations (GDPR). What you can expect from us: opportunity for annual bonuses, medical insurance, cycle to work scheme, work from home and wellbeing …
Experience working in environments with AI/ML components, or interest in learning data workflows for ML applications. Bonus if you have exposure to Kafka, Spark, or Flink. Experience with data compliance regulations (GDPR). What you can expect from us: salary £65-75k, opportunity for annual bonuses, medical insurance, cycle to work scheme, work …
Proven track record of building and managing real-time data pipelines across multiple initiatives. Expertise in developing data backbones using distributed streaming platforms (Kafka, Spark Streaming, Flink, etc.). Experience working with cloud platforms such as AWS, GCP, or Azure for real-time data ingestion and storage. Programming skills in Python, Java, Scala, or a similar …
Proven track record of building and managing real-time data pipelines across multiple initiatives. Expertise in developing data backbones using distributed streaming platforms (Kafka, Spark Streaming, Flink, etc.). Experience working with cloud platforms such as AWS, GCP, or Azure for real-time data ingestion and storage. Ability to optimise and refactor existing data pipelines for …
CloudFormation. Understanding of the ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or other distributed data programming frameworks (e.g., Flink, Hadoop, Beam). Familiarity with Databricks as a data and AI platform, or with the Lakehouse Architecture. Experience with data quality and/or data lineage frameworks like Great Expectations …
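The "data quality frameworks like Great Expectations" item above refers to declarative checks run against a dataset. A hand-rolled sketch of the idea follows; the function names and report shape are illustrative, not the real library's API.

```python
# Declarative data-quality checks in the spirit of Great Expectations:
# each check returns a small report dict instead of raising, so a
# pipeline can collect all failures in one pass.
def expect_not_null(rows, column):
    failures = [r for r in rows if r.get(column) is None]
    return {"check": f"{column} not null",
            "passed": not failures,
            "failed_rows": len(failures)}

def expect_between(rows, column, lo, hi):
    failures = [r for r in rows
                if r.get(column) is None or not (lo <= r[column] <= hi)]
    return {"check": f"{column} in [{lo}, {hi}]",
            "passed": not failures,
            "failed_rows": len(failures)}

rows = [
    {"user_id": 1, "age": 34},
    {"user_id": 2, "age": None},   # violates the age not-null check
    {"user_id": 3, "age": 27},
]
report = [
    expect_not_null(rows, "user_id"),
    expect_not_null(rows, "age"),
    expect_between(rows, "user_id", 1, 10),
]
# report[1] flags one failed row; the other checks pass
```

In a real deployment these checks would run as a pipeline stage, with the report routed to monitoring rather than printed.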
/ML platforms or other advanced analytics infrastructure. Familiarity with infrastructure-as-code (IaC) tools such as Terraform or CloudFormation. Experience with modern data engineering technologies (e.g., Kafka, Spark, Flink). Why join YouLend? Award-Winning Workplace: YouLend has been recognised as one of the "Best Places to Work 2024" by the Sunday Times for being a supportive …
Grow with us. We are looking for a Machine Learning Engineer to work across the end-to-end ML lifecycle, alongside our existing Product & Engineering team. About Trudenty: The Trudenty Trust Network provides personalised consumer fraud risk intelligence for fraud …
Key Responsibilities Design and implement real-time data pipelines using tools like Apache Kafka, Apache Flink, or Spark Streaming. Develop and maintain event schemas using Avro, Protobuf, or JSON Schema. Collaborate with backend teams to integrate event-driven microservices. Ensure data quality, lineage, and observability across streaming systems. Optimize performance and scalability of streaming applications. Implement CI/CD … data engineering or backend development. Strong programming skills in Python, Java, or Scala. Hands-on experience with Kafka, Kinesis, or similar messaging systems. Familiarity with stream processing frameworks like Flink, Kafka Streams, or Spark Structured Streaming. Solid understanding of event-driven design patterns (e.g., event sourcing, CQRS). Experience with cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code.
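The event-driven design patterns named above (event sourcing, CQRS) share one core idea: current state is derived by folding an append-only log of events, never mutated in place. A minimal sketch, with illustrative event names and fields:

```python
# Event-sourcing sketch: the event log is the source of truth, and the
# current balance is a pure fold over it. Event types are illustrative.
def apply_event(balance, event):
    # pure transition function: (state, event) -> new state
    if event["type"] == "Deposited":
        return balance + event["amount"]
    if event["type"] == "Withdrawn":
        return balance - event["amount"]
    raise ValueError(f"unknown event type: {event['type']}")

def replay(events):
    # rebuild state from scratch by replaying the full log
    balance = 0
    for e in events:
        balance = apply_event(balance, e)
    return balance

log = [
    {"type": "Deposited", "amount": 100},
    {"type": "Withdrawn", "amount": 30},
    {"type": "Deposited", "amount": 5},
]
# replay(log) == 75
```

CQRS extends this by keeping the write side (the log) separate from one or more read models built from the same events, each optimised for its queries.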
or a related field. Proficiency in Python, Java, and SQL; familiarity with Rust is a plus. Proven track record with cloud platforms (e.g., AWS) and distributed data tools (e.g., Flink, AWS Batch). Strong understanding of data security, quality, and governance principles. Excellent communication and collaboration skills across technical and non-technical teams. Bonus Points For: Experience with orchestration …
health Partner with cross-functional teams to deliver robust data solutions 💡 What You’ll Bring Strong hands-on experience building streaming data platforms Deep understanding of tools like Kafka, Flink, Spark Streaming, etc. Proficiency in Python, Java, or Scala Cloud experience with AWS, GCP, or Azure Familiarity with orchestration tools like Airflow, Kubernetes Collaborative, solutions-focused mindset and a …
City of London, London, United Kingdom Hybrid / WFH Options
Atarus
databases, including PostgreSQL, ClickHouse, Cassandra, and Redis. In-depth knowledge of ETL/ELT pipelines, data transformation, and storage optimization. Skilled in working with big data frameworks like Spark, Flink, and Druid. Hands-on experience with both bare metal and AWS environments. Strong programming skills in Python, Java, and other relevant languages. Proficiency in containerization technologies (Docker, Kubernetes) and …
and non-relational databases. Qualifications/Nice to have Experience with a messaging middleware platform like Solace, Kafka, or RabbitMQ. Experience with Snowflake and distributed processing technologies (e.g., Hadoop, Flink, Spark).
programming skills (Python, Java, C++) and experience with DevOps practices (CI/CD). Familiarity with containerization (Docker, Kubernetes), RESTful APIs, microservices architecture, and big data technologies (Hadoop, Spark, Flink). Knowledge of NoSQL databases (MongoDB, Cassandra, DynamoDB), message queueing systems (Kafka, RabbitMQ), and version control systems (Git). Preferred Skills: Experience with natural language processing libraries such as …
Hands-on experience with SQL, data pipelines, and data orchestration and integration tools. Experience with data platforms on premises/cloud using technologies such as: Hadoop, Kafka, Apache Spark, Apache Flink, and object, relational, and NoSQL data stores. Hands-on experience with big data application development and cloud data warehousing (e.g., Hadoop, Spark, Redshift, Snowflake, GCP BigQuery). Expertise in building data …