diverse sources, transform it into usable formats, and load it into data warehouses, data lakes or lakehouses. Big Data Technologies: Utilize big data technologies such as Spark, Kafka, and Flink for distributed data processing and analytics. Cloud Platforms: Deploy and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), leveraging cloud-native services …
with a focus on data quality and reliability. Design and manage data storage solutions, including databases, warehouses, and lakes. Leverage cloud-native services and distributed processing tools (e.g., Apache Flink, AWS Batch) to support large-scale data workloads. Operations & Tooling: Monitor, troubleshoot, and optimize data pipelines to ensure performance and cost efficiency. Implement data governance, access controls, and security … pipelines and data architectures. Hands-on expertise with cloud platforms (e.g., AWS) and cloud-native data services. Comfortable with big data tools and distributed processing frameworks such as Apache Flink or AWS Batch. Strong understanding of data governance, security, and best practices for data quality. Effective communicator with the ability to work across technical and non-technical teams. Additional … following prior to applying to GSR? Experience level, applicable to this role? Select How many years have you designed, built, and operated stateful, exactly-once streaming pipelines in Apache Flink (or an equivalent framework such as Spark Structured Streaming or Kafka Streams)? Select Which statement best describes your hands-on responsibility for architecting and tuning cloud-native data lake …
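For context on the screening question above: a "stateful, exactly-once" Flink pipeline keeps running aggregates in the framework's managed keyed state and enables checkpointing so that a failure never double-counts records. A minimal illustrative sketch in Java (assumes the Flink 1.x DataStream API; the socket source and job name are placeholders, not anything from the listing):

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExactlyOnceCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Snapshot all operator state every 10s; on failure Flink rewinds the
        // source and restores state from the last completed checkpoint, so each
        // record is reflected in state exactly once.
        env.enableCheckpointing(10_000, CheckpointingMode.EXACTLY_ONCE);

        env.socketTextStream("localhost", 9999)            // toy source for illustration
           .map(word -> Tuple2.of(word, 1))
           .returns(Types.TUPLE(Types.STRING, Types.INT))  // type hint for the lambda
           .keyBy(t -> t.f0)                               // keyed, i.e. stateful, stream
           .sum(1)                                         // running count lives in managed state
           .print();

        env.execute("exactly-once word count");
    }
}
```

Note that end-to-end exactly-once delivery also needs a replayable source and a transactional sink (for example, Kafka with two-phase-commit producers); checkpointing alone only guarantees exactly-once effects on Flink's internal state.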
Architect to join our Customer Success team in EMEA. In this highly technical role, you will design, implement, and optimize real-time data streaming solutions, focusing specifically on Apache Flink and Ververica's Streaming Data Platform. You'll collaborate directly with customers and cross-functional teams, leveraging deep expertise in distributed systems, event-driven architectures, and cloud-native technologies … architecture consulting, and performance optimization. Key Responsibilities: Analyze customer requirements and design scalable, reliable, and efficient stream-processing solutions. Provide technical implementation support and hands-on expertise deploying Apache Flink and Ververica's platform in pre-sales and post-sales engagements. Develop prototypes and proof-of-concept (PoC) implementations to validate and showcase solution feasibility and performance. Offer architectural … reviews, and promote best practices in stream processing. Deliver professional services engagements, including technical training sessions, workshops, and performance optimization consulting. Act as a subject matter expert on Apache Flink, real-time stream processing, and distributed architectures. Create and maintain high-quality technical documentation, reference architectures, best-practice guides, and whitepapers. Stay informed on emerging streaming technologies, cloud platforms …
building, maintaining, and optimizing CI/CD pipelines. Big Data & Data Engineering: Strong background in processing large datasets and building data pipelines using platforms like Apache Spark, Databricks, Apache Flink, or similar big data tools. Experience with batch and stream processing. Security: In-depth knowledge of security practices in cloud environments, including identity management, encryption, and secure application development. …
Head of Data & Analytics Architecture and AI. Location: Chiswick Park. Full time. Posted 30+ days ago. Requisition ID: JR19765. Want to help us bring …
London, England, United Kingdom Hybrid / WFH Options
Lloyds Banking Group
with relational and non-relational databases to build data solutions, such as SQL Server/Oracle, experience with relational and dimensional data structures. Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop). Proficiency in infrastructure as code (IaC) using Terraform. Experience with CI/CD pipelines and related tools/frameworks. Containerisation: Good knowledge of containers (Docker …
London, England, United Kingdom Hybrid / WFH Options
IDEXX Laboratories, Inc
Kubernetes). Experience working in environments with AI/ML components or interest in learning data workflows for ML applications. Bonus if you have exposure to Kafka, Spark, or Flink. Experience with data compliance regulations (GDPR). What you can expect from us: Salary 65-75k, opportunity for annual bonuses, Medical Insurance, Cycle to work scheme, Work from home and wellbeing …
track record of building and managing real-time data pipelines across multiple initiatives. Expertise in developing data backbones using distributed streaming platforms (Kafka, Spark Streaming, Flink, etc.). Experience working with cloud platforms such as AWS, GCP, or Azure for real-time data ingestion and storage. Programming skills in Python, Java, Scala, or a similar language. Ability to optimise and refactor existing data pipelines for …
/ML platforms or other advanced analytics infrastructure. Familiarity with infrastructure-as-code (IaC) tools such as Terraform or CloudFormation. Experience with modern data engineering technologies (e.g., Kafka, Spark, Flink). Why join YouLend? Award-Winning Workplace: YouLend has been recognised as one of the "Best Places to Work 2024" by the Sunday Times for being a supportive …
London, England, United Kingdom Hybrid / WFH Options
Methods
NiFi and Apache Airflow to automate data flows and manage complex workflows within hybrid environments. Event Streaming Experience: Utilise event-driven technologies such as Kafka, Apache NiFi, and Apache Flink to handle real-time data streams effectively. Security and Compliance: Manage security setups and access controls, incorporating tools like Keycloak to protect data integrity and comply with legal standards …
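To make the event-streaming requirement concrete, here is a minimal Kafka producer sketch in Java (the broker address, topic name, and payload are hypothetical; the config keys are standard Kafka client settings):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class EventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // hypothetical broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Idempotent producer: retries cannot introduce duplicate events.
        props.put("enable.idempotence", "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by entity id routes all events for one entity to the same
            // partition, which is the only scope where Kafka guarantees ordering.
            producer.send(new ProducerRecord<>("user-events", "user-42", "{\"action\":\"login\"}"));
        }
    }
}
```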
e.g., Hadoop, Spark). · Strong knowledge of data workflow solutions like Azure Data Factory, Apache NiFi, Apache Airflow, etc. · Good knowledge of stream and batch processing solutions like Apache Flink, Apache Kafka. · Good knowledge of log management, monitoring, and analytics solutions like Splunk, Elastic Stack, New Relic, etc. Given that this is just a short snapshot of the …
London, England, United Kingdom Hybrid / WFH Options
Cloudbeds
shares knowledge openly, helps each other unblock challenges, and values collective wins over individual credit. Access to Quality Tooling & Infrastructure: Having the right modern tools (e.g., Airflow, dbt, Spark, Flink, cloud platforms) and the ability to improve them when needed sets the foundation for effective work. Supportive Leadership & Growth Mindset: Managers who advocate for your growth, give regular feedback …
push the boundaries of data engineering and analytics but also contribute back to the open-source community through continuous innovation and solution development. Golang, Python, Java, dbt, Airflow, Kafka, Flink, Kubernetes, Terraform, Prometheus, Grafana, and more. What you’ll do: This role will allow you to master the three pillars of every organisation: Software Engineering, Infrastructure, and Data. Software Engineering: Develop microservices, libraries, Flink jobs, data pipelines, and Kubernetes controllers. Stakeholder Collaboration: Work closely with both technical and non-technical stakeholders to define, design, and implement solutions within our Data Platform. A trusted relationship will be key to success. Lead Technological Initiatives: Drive forward market-leading projects, exploring and integrating new technologies into our ecosystem. Platform as a Product … requires not only deep technical expertise but also a proactive, engaging approach to working with others and building lasting partnerships. Big Data Technologies: Familiarity with tools such as Kafka, Flink, dbt, and Airflow, with a deep understanding of distributed computing and large-scale data processing systems. Nice to Have: Kubernetes Expertise: Experience with Kubernetes, Helm, ArgoCD, and related technologies. …
Grow with us. We are looking for a Machine Learning Engineer to work across the end-to-end ML lifecycle, alongside our existing Product & Engineering team. About Trudenty: The Trudenty Trust Network provides personalised consumer fraud risk intelligence for fraud …
Key Responsibilities: Design and implement real-time data pipelines using tools like Apache Kafka, Apache Flink, or Spark Streaming. Develop and maintain event schemas using Avro, Protobuf, or JSON Schema. Collaborate with backend teams to integrate event-driven microservices. Ensure data quality, lineage, and observability across streaming systems. Optimize performance and scalability of streaming applications. Implement CI/CD … data engineering or backend development. Strong programming skills in Python, Java, or Scala. Hands-on experience with Kafka, Kinesis, or similar messaging systems. Familiarity with stream processing frameworks like Flink, Kafka Streams, or Spark Structured Streaming. Solid understanding of event-driven design patterns (e.g., event sourcing, CQRS). Experience with cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code …
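As a concrete illustration of the schema work this listing describes, a minimal Avro encoding sketch in Java (the OrderPlaced schema and its field names are invented for illustration; in practice the schema would live in a schema registry and evolve under compatibility rules):

```java
import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class OrderEventSchema {
    // Hypothetical event schema; defaults like "currency" let readers with a
    // newer schema decode older records (backward-compatible evolution).
    private static final Schema SCHEMA = new Schema.Parser().parse("""
        {"type": "record", "name": "OrderPlaced", "namespace": "com.example.events",
         "fields": [
           {"name": "orderId", "type": "string"},
           {"name": "amountCents", "type": "long"},
           {"name": "currency", "type": "string", "default": "GBP"}
         ]}""");

    public static byte[] encode(String orderId, long amountCents) throws Exception {
        GenericRecord event = new GenericData.Record(SCHEMA);
        event.put("orderId", orderId);
        event.put("amountCents", amountCents);
        event.put("currency", "GBP");

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(SCHEMA).write(event, encoder);
        encoder.flush();
        return out.toByteArray();  // bytes to publish as a Kafka message value
    }
}
```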
London, England, United Kingdom Hybrid / WFH Options
Apollo Solutions
Experience (Nice to Have): Familiarity with dbt, Fivetran, Apache Airflow, Data Mesh, Data Vault 2.0, Fabric, and Apache Spark. Experience working with streaming technologies such as Apache Kafka, Apache Flink, or Google Cloud Dataflow. Hands-on experience with modern data orchestration tools like Dagster or Prefect. Knowledge of data governance and cataloging tools like Great Expectations, Collibra, or Alation. …
or a related field. Proficiency in Python, Java, and SQL; familiarity with Rust is a plus. Proven track record with cloud platforms (e.g., AWS) and distributed data tools (e.g., Flink, AWS Batch). Strong understanding of data security, quality, and governance principles. Excellent communication and collaboration skills across technical and non-technical teams. Bonus Points For: Experience with orchestration …
are recognised by industry leaders like Gartner's Magic Quadrant, Forrester Wave and Frost Radar. Our tech stack: Superset and similar data visualisation tools. ETL tools: Airflow, DBT, Airbyte, Flink, etc. Data warehousing and storage solutions: ClickHouse, Trino, S3. AWS Cloud, Kubernetes, Helm. Relevant programming languages for data engineering tasks: SQL, Python, Java, etc. What you will be doing …
Company Description: We’re ASOS, the online retailer for fashion lovers all around the world. We exist to give our customers the confidence to be whoever they want to …
challenges of dealing with large data sets, both structured and unstructured. Used a range of open source frameworks and development tools, e.g. NumPy/SciPy/Pandas, Spark, Kafka, Flink. Working knowledge of one or more relevant database technologies, e.g. Oracle, Postgres, MongoDB, ArcticDB. Proficient on Linux. Advantageous: An excellent understanding of financial markets and instruments. An understanding of …