usable formats, and load it into data warehouses, data lakes, or lakehouses. Big Data Technologies: Utilize big data technologies such as Spark, Kafka, and Flink for distributed data processing and analytics. Cloud Platforms: Deploy and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud Platform …
like Apache Airflow, dbt, or custom frameworks. Strong experience with cloud data platforms (e.g., AWS, Azure, GCP) and distributed data systems (Spark, Kafka, Flink, etc.). Proficiency in Python (or similar languages) with solid software engineering fundamentals (testing, modularity, version control). Hands-on experience with SQL and NoSQL …
/CD pipelines. Big Data & Data Engineering: Strong background in processing large datasets and building data pipelines using platforms like Apache Spark, Databricks, Apache Flink, or similar big data tools. Experience with batch and stream processing. Security: In-depth knowledge of security practices in cloud environments, including identity management …
ETL processes, and database design. Demonstrated ability to architect solutions for big data challenges. Preferred Qualifications: Experience with real-time data processing (Kafka, Kinesis, Flink). Knowledge of containerization and infrastructure-as-code (Docker, Kubernetes, Terraform). Familiarity with MLOps practices and tools (MLflow, Kubeflow, etc.). Experience with data governance frameworks …
A track record of managing real-time data pipelines across multiple initiatives. Expertise in developing data backbones using distributed streaming platforms (Kafka, Spark Streaming, Flink, etc.). Experience working with cloud platforms such as AWS, GCP, or Azure for real-time data ingestion and storage. Programming skills in Python …
A track record of managing real-time data pipelines across multiple initiatives. Expertise in developing data backbones using distributed streaming platforms (Kafka, Spark Streaming, Flink, etc.). Experience working with cloud platforms such as AWS, GCP, or Azure for real-time data ingestion and storage. Ability to optimise and …
London, England, United Kingdom Hybrid / WFH Options
Lloyds Banking Group
databases to build data solutions, such as SQL Server/Oracle, experience with relational and dimensional data structures. Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop). Proficiency in infrastructure as code (IaC) using Terraform. Experience with CI/CD pipelines and related tools/frameworks. Containerisation …
IaC) using Terraform and Terragrunt. Experience with scheduling systems, with Airflow preferred. Basic understanding of data streaming and messaging frameworks (Kafka, Spark Structured Streaming, Flink, etc.). Good knowledge of the Spark framework and its deployment with cloud services. Preferred Qualifications: Academic qualification in computer science or a STEM …
London, England, United Kingdom Hybrid / WFH Options
IDEXX
AI/ML components or interest in learning data workflows for ML applications. Bonus if you have exposure to Kafka, Spark, or Flink. Experience with data compliance regulations (GDPR). What You Can Expect From Us: Opportunity for annual bonuses. Medical Insurance. Cycle to work scheme …
AI/ML components or interest in learning data workflows for ML applications. Bonus if you have exposure to Kafka, Spark, or Flink. Experience with data compliance regulations (GDPR). What you can expect from us: Salary 65-75k. Opportunity for annual bonuses. Medical Insurance …
models for real-time analytics. Proven experience in managing real-time data pipelines across multiple initiatives. Expertise in distributed streaming platforms (Kafka, Spark Streaming, Flink). Experience with GCP (preferred), AWS, or Azure for real-time data ingestion and storage. Strong programming skills in Python, Java, or Scala. Proficiency …
Bristol, England, United Kingdom Hybrid / WFH Options
Lloyds Bank plc
databases to build data solutions, such as SQL Server/Oracle, experience with relational and dimensional data structures. Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop). Proficiency in infrastructure as code (IaC) using Terraform. Experience with CI/CD pipelines and related tools/frameworks. Containerisation …
workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or any other distributed data programming frameworks (e.g. Flink, Hadoop, Beam). Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture. Experience with data quality and/or data …