Apache Beam Jobs in England

8 of 8 Apache Beam Jobs in England

Data Engineer

City of London, London, England, United Kingdom
Equiniti
in Microsoft Fabric and Databricks, including data pipeline development, data warehousing, and data lake management
• Proficiency in Python, SQL, Scala, or Java
• Experience with data processing frameworks such as Apache Spark, Apache Beam, or Azure Data Factory
• Strong understanding of data architecture principles, data modelling, and data governance
• Experience with cloud-based data platforms, including Azure and …
Employment Type: Full-Time
Salary: Competitive salary

Data Architect

Basildon, England, United Kingdom
Coforge
technologies – Azure, AWS, GCP, Snowflake, Databricks
Must Have: Hands-on experience with at least 2 Hyperscalers (GCP/AWS/Azure platforms), specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs … skills. A minimum of 5 years' experience in a similar role. Ability to lead and mentor architects.
Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF
Desirable Skills: Designing Databricks-based …

Data Engineer (Football Club)

London Area, United Kingdom
Hybrid/Remote Options
Singular Recruitment
• Advanced-level Python for data applications and high proficiency in SQL (query tuning, complex joins)
• Hands-on experience designing and deploying ETL/ELT pipelines using Google Cloud Dataflow (Apache Beam) or similar tools
• Proficiency in data architecture, data modelling, and scalable storage design
• Solid engineering practices: Git and CI/CD for data systems
Highly Desirable Skills …

Data Engineer (Football Club)

City of London, London, United Kingdom
Hybrid/Remote Options
Singular Recruitment
• Advanced-level Python for data applications and high proficiency in SQL (query tuning, complex joins)
• Hands-on experience designing and deploying ETL/ELT pipelines using Google Cloud Dataflow (Apache Beam) or similar tools
• Proficiency in data architecture, data modelling, and scalable storage design
• Solid engineering practices: Git and CI/CD for data systems
Highly Desirable Skills …

Senior Data Engineer

London Area, United Kingdom
Nicholson Search and Selection
Strong problem-solving skills, attention to detail, and the ability to work in a collaborative environment.
Nice to Have: Exposure to real-time data processing and streaming architectures (Kafka, Beam, or similar). Experience working with modern analytics stacks and machine learning pipelines.
If you're passionate about building high-performing data systems and want to join a company …

Senior Data Engineer

City of London, London, United Kingdom
Nicholson Search and Selection
Strong problem-solving skills, attention to detail, and the ability to work in a collaborative environment.
Nice to Have: Exposure to real-time data processing and streaming architectures (Kafka, Beam, or similar). Experience working with modern analytics stacks and machine learning pipelines.
If you're passionate about building high-performing data systems and want to join a company …

Data Platform Engineer

Luton, England, United Kingdom
Hybrid/Remote Options
easyJet
field. Technical Skills Required
• Hands-on software development experience with Python and experience with modern software development and release engineering practices (e.g. TDD, CI/CD).
• Experience with Apache Spark or any other distributed data programming framework.
• Comfortable writing efficient SQL and debugging on cloud warehouses like Databricks SQL or Snowflake.
• Experience with cloud infrastructure like AWS or …

Senior Data Platform Engineer

Luton, England, United Kingdom
Hybrid/Remote Options
easyJet
indexing, partitioning.
• Hands-on IaC development experience with Terraform or CloudFormation.
• Understanding of the ML development workflow and knowledge of when and how to use dedicated hardware.
• Significant experience with Apache Spark or any other distributed data programming framework (e.g. Flink, Hadoop, Beam).
• Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture.
• Experience with data …