South East London, England, United Kingdom Hybrid / WFH Options
Singular Recruitment
for complex querying and performance tuning. ETL/ELT Pipelines: Proven experience designing, building, and maintaining production-grade data pipelines using Google Cloud Dataflow (Apache Beam) or similar technologies. GCP Stack: Hands-on expertise with BigQuery, Cloud Storage, Pub/Sub, and orchestrating workflows with Composer or Vertex …
TensorFlow, PyTorch, or JAX. Knowledge of data analytics concepts, including data warehouse technical architectures, ETL, and reporting/analytic tools and environments (such as Apache Beam, Hadoop, Spark, Pig, Hive, MapReduce, Flume). Customer-facing experience of discovery, assessment, execution, and operations. Demonstrated excellent communication, presentation, and problem …
et al.) and a clear understanding of when not to use them. Experience with message queues (SQS, PubSub, RabbitMQ, etc.) and data pipelines (Kafka, Beam, Kinesis, etc.). You are an effective team player with strong communication, presentation, and influencing skills. You have a passion for improving coding and development practices. …
et al.) and a clear understanding of when not to use them. Experience with message queues (SQS, PubSub, RabbitMQ, etc.) and data pipelines (Kafka, Beam, Kinesis, etc.). Effective team player with excellent communication, presentation, and influencing skills. Passion for improving coding and development practices. Experience working with microservices …
as fraud detection, network analysis, and knowledge graphs. - Optimize performance of graph queries and design for scalability. - Support ingestion of large-scale datasets using Apache Beam, Spark, or Kafka into GCP environments. - Implement metadata management, security, and data governance using Data Catalog and IAM. - Work across functional teams …
London, England, United Kingdom Hybrid / WFH Options
Starling Bank
or all of the services below would put you at the top of our list: Google Cloud Storage. Google Data Transfer Service. Google Dataflow (Apache Beam). Google Pub/Sub. Google Cloud Run. BigQuery or any RDBMS. Python. Debezium/Kafka. dbt (Data Build Tool). Interview process: Interviewing is …
and application level. You have knowledge of cloud-based ML solutions from GCP or AWS. Experience with streaming data processing frameworks such as Flink, Beam, Spark, Kafka Streams. Experience with Ansible, Terraform, GitHub Actions, Infrastructure as Code, AWS or other cloud ecosystems. Knowledge/interest in payment platforms, foreign …
London, England, United Kingdom Hybrid / WFH Options
Scope3
/Next.js for frontend applications. Low-latency, high-throughput Golang API. BigQuery data warehouse. Airflow for batch orchestration. Temporal for event orchestration. Apache Beam (Dataflow runner) for some batch jobs. Most transformations are performed via SQL directly in BigQuery. The Role: We are excited to …
London, England, United Kingdom Hybrid / WFH Options
Spotify
growth, and collaboration within the team. Who You Are Experienced with Data Processing Frameworks: Skilled with higher-level JVM-based frameworks such as Flink, Beam, Dataflow, or Spark. Comfortable with Ambiguity: Able to work through loosely defined problems and thrive in autonomous team environments. Skilled in Cloud-based Environments More ❯
process, and model biometric and survey data. Managing and optimizing this process E2E is your remit. We’re currently migrating our pipelines to use Beam/Dataflow with a BigQuery sink and shifting our DB from Postgres to BigQuery. From there, we have lots of value to extract from …
development experience with Terraform or CloudFormation. · Understanding of ML development workflow and knowledge of when and how to use dedicated hardware. · Significant experience with Apache Spark or any other distributed data programming framework (e.g. Flink, Hadoop, Beam). · Familiarity with Databricks as a data and AI platform or the …
Are You have proven experience in data engineering, including creating reliable, efficient, and scalable data pipelines using data processing frameworks such as Scio, Dataflow, Beam, or equivalent. You are comfortable working with large datasets using SQL and data analytics platforms such as BigQuery. You are knowledgeable in cloud-based …
Degree in CS, maths, statistics, engineering, physics or similar. Desirable Requirements: NoSQL databases - Elasticsearch, MongoDB, etc. (bonus). Modern data tools such as Spark/Beam (bonus). Streaming technologies such as Spark/Akka Streams (bonus).
London, England, United Kingdom Hybrid / WFH Options
Lloyds Banking Group
to build data solutions, such as SQL Server/Oracle, experience with relational and dimensional data structures. Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop). Proficiency in infrastructure as code (IaC) using Terraform. Experience with CI/CD pipelines and related tools/frameworks. Containerisation. Good understanding of cloud storage, networking, and resource provisioning. It would be great if you had... Certification in GCP “Professional Data Engineer”. Certification in Apache Kafka (CCDAK). Proficiency across the data lifecycle. WORKING FOR US Our focus is to ensure we are inclusive every day, building an organisation …
with feature stores (e.g., Feast, Tecton). Knowledge of distributed training (e.g., Horovod, distributed PyTorch). Familiarity with big data tools (e.g., Spark, Hadoop, Beam). Understanding of NLP, computer vision, or time-series analysis techniques. Knowledge of experiment tracking tools (e.g., MLflow, Weights & Biases). Experience with model … Familiarity with reinforcement learning or generative AI models. Tools & Technologies: Languages: Python, SQL (optionally Scala, Java for large-scale systems). Data Processing: Pandas, NumPy, Apache Spark, Beam. Model Serving: TensorFlow Serving, TorchServe, FastAPI, Flask. Experiment Tracking & Monitoring: MLflow, Neptune.ai, Weights & Biases. …
Location: Remote, with occasional company meetings in Bristol (maximum 1x a month). Beam Connectivity are a startup in the automotive IoT space. We work with established and up-and-coming vehicle manufacturers to deliver best-in-class connected vehicle experiences. After a successful first 5 years and announcement of … nor the responsibility for the end-to-end system. Delivering a robust automotive IoT solution requires a wide variety of skills and experience. At Beam, we are a truly multi-disciplinary team, covering all the skills required to deliver a first-class connected experience. Our flagship product is the … one roof, so you’ll be exposed to all this technology at one time or other. This should excite you, not scare you... At Beam, we spend our engineering energy on three main things: Building out our core CVaaS platform - building new features, adding resilience, and rolling this out …
positive team environment! What You'll Do: Work with large-scale data pipelines with data processing frameworks like Scio, BigQuery, Google Cloud Platform, and Apache Beam. Develop, deploy, and operate Java services that impact millions of users. Work towards supporting machine learning projects powering the experience that suits each … You are familiar with the concepts of data modeling, data access, and data storage techniques. You are familiar with distributed data processing frameworks (e.g., Beam, Spark). You want to work on a team employing agile software development processes, data-driven development, and responsible experimentation. You value opportunities to …