London, England, United Kingdom Hybrid / WFH Options
So Energy
design of data solutions for BigQuery. Expertise in logical and physical data modelling. Hands-on experience using Google Dataflow, GCS, Cloud Functions, BigQuery, Dataproc and Apache Beam (Python) to design data transformation rules for batch and streaming data. Solid Python programming skills and experience using Apache Beam (Python) …
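As a hedged illustration of the Beam (Python) work this listing describes, here is a minimal batch pipeline sketch reading CSVs from GCS and writing to BigQuery; the project, bucket, table names and CSV layout are all hypothetical, not details from the posting:

```python
# A minimal, hypothetical Beam (Python) batch pipeline sketch:
# GCS CSV -> parse -> BigQuery. Names and layout are assumptions.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_row(line: str) -> dict:
    # Assumed CSV layout: meter_id,timestamp,kwh
    meter_id, ts, kwh = line.split(",")
    return {"meter_id": meter_id, "ts": ts, "kwh": float(kwh)}

options = PipelineOptions(
    runner="DataflowRunner",           # or "DirectRunner" for local tests
    project="my-project",              # hypothetical
    region="europe-west2",
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/readings/*.csv")
        | "Parse" >> beam.Map(parse_row)
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:analytics.readings",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
        )
    )
```

The same pipeline code runs both batch and streaming workloads under Beam's unified model; only the source and runner options change.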
have experience architecting data pipelines and are self-sufficient in getting the data you need to build and evaluate models, using tools like Dataflow, Apache Beam, or Spark. You care about agile software processes, data-driven development, reliability, and disciplined experimentation. You have experience and passion for fostering … Platform is a plus. Experience with building data pipelines and getting the data you need to build and evaluate your models, using tools like Apache Beam/Spark is a plus. Where You'll Be: For this role you should be based in London (UK).
have experience architecting data pipelines and are self-sufficient in getting the data you need to build and evaluate models, using tools like Dataflow, Apache Beam, or Spark. You care about agile software processes, data-driven development, reliability, and disciplined experimentation. You have experience and passion for fostering … scalable machine learning frameworks. Experience with building data pipelines and getting the data you need to build and evaluate your models, using tools like Apache Beam/Spark. Where You'll Be: We offer you the flexibility to work where you work best! For this role, you can …
with TensorFlow, PyTorch, Scikit-learn, etc. is a strong plus. You have some experience with large-scale, distributed data processing frameworks/tools like Apache Beam, Apache Spark, or even our open-source API for it, Scio, and cloud platforms like GCP or AWS. You care about …
architectures for ML frameworks in complex problem spaces in collaboration with product teams. Experience with large-scale, distributed data processing frameworks/tools like Apache Beam, Apache Spark, and cloud platforms like GCP or AWS. Where You'll Be: We offer you the flexibility to work where …
advanced expertise in Google Cloud data services: Dataproc, Dataflow, Pub/Sub, BigQuery, Cloud Spanner, and Bigtable. Hands-on experience with orchestration tools like Apache Airflow or Cloud Composer. Hands-on experience with one or more of the following GCP data processing services: Dataflow (Apache Beam), Dataproc (Apache Spark/Hadoop), or Composer (Apache Airflow). Proficiency in at least one scripting/programming language (e.g., Python, Java, Scala) for data manipulation and pipeline development; Scala is mandated in some cases. Deep understanding of data lakehouse design, event-driven architecture, and hybrid cloud data strategies. …
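As one hedged illustration of the Composer/Airflow orchestration this listing names, the sketch below schedules a daily BigQuery rollup. The DAG id, SQL, and project/dataset/table names are hypothetical; it assumes the apache-airflow-providers-google package is installed:

```python
# A minimal Composer/Airflow DAG sketch: one daily BigQuery rollup job.
# All identifiers are hypothetical, not from the posting.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_readings_rollup",   # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # schedule_interval on older Airflow 2.x
    catchup=False,
) as dag:
    BigQueryInsertJobOperator(
        task_id="rollup",
        configuration={
            "query": {
                "query": (
                    "SELECT meter_id, DATE(ts) AS day, SUM(kwh) AS total_kwh "
                    "FROM `my-project.analytics.readings` "  # hypothetical table
                    "GROUP BY meter_id, day"
                ),
                "useLegacySql": False,
                "destinationTable": {
                    "projectId": "my-project",
                    "datasetId": "analytics",
                    "tableId": "daily_rollup",
                },
                "writeDisposition": "WRITE_TRUNCATE",
            }
        },
    )
```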
Databricks. Must Have: Hands-on experience with at least 2 hyperscalers (GCP/AWS/Azure platforms), specifically in big data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse, Pub/Sub/Kinesis … years' experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 hyperscalers]: GCP, AWS, Azure, big data, Apache Spark, Beam, BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF …
Flask, Django, or FastAPI. Proficiency in Python 3.x and libraries like Pandas, NumPy, and Dask. Experience with data manipulation and processing frameworks (e.g., PySpark, Apache Beam). Strong knowledge of databases, including SQL and NoSQL (e.g., PostgreSQL, MongoDB). Familiarity with ETL processes and tools such as Airflow …
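To make the FastAPI-plus-Pandas combination concrete, here is a minimal sketch; the orders.csv file, its columns, and the route are all hypothetical, and a real service would query PostgreSQL or MongoDB rather than a local CSV:

```python
# A minimal, hypothetical FastAPI + Pandas sketch.
# Run with: uvicorn app:app --reload
import pandas as pd
from fastapi import FastAPI, HTTPException

app = FastAPI()
orders = pd.read_csv("orders.csv")  # assumed columns: customer_id, amount

@app.get("/customers/{customer_id}/total")
def customer_total(customer_id: int) -> dict:
    # Filter the in-memory frame and aggregate one customer's spend.
    subset = orders[orders["customer_id"] == customer_id]
    if subset.empty:
        raise HTTPException(status_code=404, detail="unknown customer")
    return {"customer_id": customer_id, "total": float(subset["amount"].sum())}
```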
on software development experience with Python and experience with modern software development and release engineering practices (e.g. TDD, CI/CD). Experience with Apache Spark or other distributed data processing frameworks. Comfortable writing efficient SQL and debugging on cloud warehouses like Databricks SQL or Snowflake. Experience with …
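As a small sketch of the Spark-plus-SQL work described, the snippet below registers a dataset as a temporary view and aggregates it with Spark SQL; the bucket path and column names are assumptions, and the same query shape would apply against Databricks SQL or Snowflake tables:

```python
# A minimal PySpark sketch: load a (hypothetical) Parquet dataset,
# expose it to Spark SQL, and aggregate.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events-rollup").getOrCreate()

events = spark.read.parquet("s3://my-bucket/events/")  # hypothetical path
events.createOrReplaceTempView("events")

daily = spark.sql("""
    SELECT event_date, COUNT(*) AS n_events
    FROM events
    GROUP BY event_date
    ORDER BY event_date
""")
daily.show()
```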
as fraud detection, network analysis, and knowledge graphs. Optimize performance of graph queries and design for scalability. Support ingestion of large-scale datasets using Apache Beam, Spark, or Kafka into GCP environments. Implement metadata management, security, and data governance using Data Catalog and IAM. Work across functional teams …
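As a hedged illustration of the streaming ingestion this posting mentions, the sketch below reads JSON edge records from a Pub/Sub topic with Beam, counts edges per source node in one-minute windows, and writes the counts to BigQuery; the topic, table, schema, and the "src" field are all hypothetical:

```python
# A minimal streaming Beam sketch: Pub/Sub JSON edge records ->
# per-source counts in 60-second windows -> BigQuery.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/edges")
        | "Decode" >> beam.Map(lambda raw: json.loads(raw.decode("utf-8")))
        | "KeyBySource" >> beam.Map(lambda edge: (edge["src"], 1))
        | "Window" >> beam.WindowInto(FixedWindows(60))
        | "Count" >> beam.CombinePerKey(sum)
        | "Format" >> beam.Map(lambda kv: {"src": kv[0], "degree": kv[1]})
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:graph.edge_counts",
            schema="src:STRING,degree:INTEGER",
        )
    )
```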