Apache Beam Jobs in London

1 to 25 of 48 Apache Beam Jobs in London

Senior Data Backend Engineer (Ranking Ingestion) (Remote - United Kingdom)

London, England, United Kingdom
Hybrid / WFH Options
Yelp USA
we’re looking for great people, not just those who simply check off all the boxes. What you'll do: Work with technologies like Apache Lucene, Apache Flink, Apache Beam, and Kubernetes to build core components of Yelp’s search infrastructure. Design, build, and maintain scalable … and complexity analysis. Comprehensive understanding of systems and application design, including operational and reliability trade-offs. Experience with distributed data processing frameworks such as Apache Flink or Apache Beam. Familiarity with search technologies like Apache Lucene or Elasticsearch is a plus. Experience working with containerized environments and …

Data Engineer

London, England, United Kingdom
Hybrid / WFH Options
So Energy
design of data solutions for BigQuery. Expertise in logical and physical data modelling. Hands-on experience using Google Dataflow, GCS, Cloud Functions, BigQuery, Dataproc, Apache Beam (Python) in designing data transformation rules for batch and data streaming. Solid Python programming skills and using Apache Beam (Python) …

Machine Learning Engineer - Content Understanding

London, England, United Kingdom
Spotify
have experience architecting data pipelines and are self-sufficient in getting the data you need to build and evaluate models, using tools like Dataflow, Apache Beam, or Spark. You care about agile software processes, data-driven development, reliability, and disciplined experimentation. You have experience and passion for fostering … Platform is a plus. Experience with building data pipelines and getting the data you need to build and evaluate your models, using tools like Apache Beam/Spark is a plus. Where You'll Be: For this role you should be based in London (UK).

Machine Learning Engineer - Personalization

London, United Kingdom
Spotify
have experience architecting data pipelines and are self-sufficient in getting the data you need to build and evaluate models, using tools like Dataflow, Apache Beam, or Spark. You care about agile software processes, data-driven development, reliability, and disciplined experimentation. You have experience and passion for fostering … scalable machine learning frameworks. Experience with building data pipelines and getting the data you need to build and evaluate your models, using tools like Apache Beam/Spark. Where You'll Be: We offer you the flexibility to work where you work best! For this role, you can …
Employment Type: Permanent
Salary: GBP Annual

Machine Learning Engineer, II

London, United Kingdom
Spotify
with TensorFlow, PyTorch, Scikit-learn, etc. is a strong plus. You have some experience with large-scale, distributed data processing frameworks/tools like Apache Beam, Apache Spark, or even our open source API for it - Scio, and cloud platforms like GCP or AWS. You care about …
Employment Type: Permanent
Salary: GBP Annual

Machine Learning Engineer, GenRecs, Personalization

London, United Kingdom
Spotify
architectures for ML frameworks in complex problem spaces in collaboration with product teams. Experience with large-scale, distributed data processing frameworks/tools like Apache Beam, Apache Spark, and cloud platforms like GCP or AWS. Where You'll Be: We offer you the flexibility to work where …
Employment Type: Permanent
Salary: GBP Annual

Solutions Architect (Data Analytics)

City of London, London, United Kingdom
Vallum Associates
Databricks. Must Have: Hands-on experience on at least 2 Hyperscalers (GCP/AWS/Azure platforms), specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis … years’ experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF …

Solutions Architect (Data Analytics)- Presales, RFP creation

London Area, United Kingdom
Vallum Associates
Databricks. Must Have: Hands-on experience on at least 2 Hyperscalers (GCP/AWS/Azure platforms), specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis … years’ experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF …

Senior Cloud Data Architect

London, England, United Kingdom
ZipRecruiter
Big Data, and Cloud Technologies. Hands-on expertise in at least 2 Cloud platforms (Azure, AWS, GCP, Snowflake, Databricks) and Big Data processing (e.g., Apache Spark, Beam). Proficiency in key technologies like BigQuery, Redshift, Synapse, Pub/Sub, Kinesis, Event Hubs, Kafka, Dataflow, Airflow, and ADF. Strong … with the ability to mentor architects. Mandatory expertise in at least 2 Hyperscalers (GCP/AWS/Azure) and Big Data tools (e.g., Spark, Beam). Desirable: Experience designing Databricks solutions and familiarity with DevOps tools. Coforge is an equal opportunities employer and welcomes applications from all sections of …

Python Developer

City of London, England, United Kingdom
VE3
Flask, Django, or FastAPI. Proficiency in Python 3.x and libraries like Pandas, NumPy, and Dask. Experience with data manipulation and processing frameworks (e.g., PySpark, Apache Beam). Strong knowledge of databases, including SQL and NoSQL (e.g., PostgreSQL, MongoDB). Familiarity with ETL processes and tools such as Airflow …
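The ETL-plus-SQL skill set this listing describes can be sketched with nothing but the Python standard library; the table name, records, and field names below are invented for illustration:

```python
import sqlite3

# Extract: hypothetical raw records, as they might arrive from an upstream feed.
raw = [
    {"name": "alice", "amount": "10.50"},
    {"name": "bob", "amount": "3.25"},
    {"name": "alice", "amount": "2.00"},
]

# Transform: normalise names and parse amounts into numeric types.
rows = [(r["name"].title(), float(r["amount"])) for r in raw]

# Load: insert into SQLite (a stand-in for PostgreSQL or a warehouse),
# then aggregate with SQL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE payments (name TEXT, amount REAL)")
con.executemany("INSERT INTO payments VALUES (?, ?)", rows)
totals = dict(con.execute(
    "SELECT name, SUM(amount) FROM payments GROUP BY name ORDER BY name"
))
print(totals)  # {'Alice': 12.5, 'Bob': 3.25}
```

In a production stack the same extract/transform/load steps would typically be expressed as Airflow tasks or as PySpark/Beam transforms, as the listing suggests; the shape of the work is the same.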

Software Engineer - Data- Tech-Driven Global Hedge Fund

London, United Kingdom
highest data throughput are implemented in Java. Within Data Engineering, Dataiku, Snowflake, Prometheus, and ArcticDB are used heavily. Kafka is used for data pipelines, Apache Beam for ETL, Bitbucket for source control, Jenkins for continuous integration, Grafana + Prometheus for metrics collection, and ELK for log shipping and monitoring …
Employment Type: Permanent
Salary: GBP Annual

Data Platform Engineer

London, England, United Kingdom
easyJet Airline Company PLC
on software development experience with Python and experience with modern software development and release engineering practices (e.g. TDD, CI/CD). · Experience with Apache Spark or any other distributed data programming frameworks. · Comfortable writing efficient SQL and debugging on cloud warehouses like Databricks SQL or Snowflake. · Experience with …

GCP Data Engineer - UK

London, England, United Kingdom
Ampstek
as fraud detection, network analysis, and knowledge graphs. Optimize performance of graph queries and design for scalability. Support ingestion of large-scale datasets using Apache Beam, Spark, or Kafka into GCP environments. Implement metadata management, security, and data governance using Data Catalog and IAM. Work across functional teams …

Cloud Architect (GCP)

London, England, United Kingdom
Ampstek
as fraud detection, network analysis, and knowledge graphs. Optimize performance of graph queries and design for scalability. Support ingestion of large-scale datasets using Apache Beam, Spark, or Kafka into GCP environments. Implement metadata management, security, and data governance using Data Catalog and IAM. Work across functional teams …