Databricks. Must have hands-on experience with at least 2 hyperscalers (GCP/AWS/Azure) and specifically with Big Data processing services (Apache Spark, Beam, or equivalent). In-depth knowledge of key technologies such as BigQuery/Redshift/Synapse/Pub/Sub/Kinesis. … years' experience in a similar role. Ability to lead and mentor architects. Mandatory skills [at least 2 hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF.
… Big Data, and Cloud Technologies. Hands-on expertise in at least 2 cloud platforms (Azure, AWS, GCP, Snowflake, Databricks) and Big Data processing (e.g., Apache Spark, Beam). Proficiency in key technologies like BigQuery, Redshift, Synapse, Pub/Sub, Kinesis, Event Hubs, Kafka, Dataflow, Airflow, and ADF. Strong … with the ability to mentor architects. Mandatory expertise in at least 2 hyperscalers (GCP/AWS/Azure) and Big Data tools (e.g., Spark, Beam). Desirable: experience designing Databricks solutions and familiarity with DevOps tools. Coforge is an equal opportunities employer and welcomes applications from all sections of …
… Technologies. Implementation and hands-on experience with at least two cloud platforms (Azure, AWS, GCP, Snowflake, Databricks). Expertise in Big Data processing services like Apache Spark, Beam, or equivalents on at least two hyperscalers. Knowledge of key technologies such as BigQuery, Redshift, Synapse, Pub/Sub, Kinesis, Kafka. … of relevant experience. Leadership and mentoring abilities. Mandatory skills include experience with at least two hyperscalers and technologies such as GCP, AWS, Azure, Spark, Beam, Kafka, Dataflow, Airflow, ADF, Databricks, Jenkins, Terraform, StackDriver.
… Technologies. Implementation and hands-on experience in Azure, AWS, GCP, Snowflake, Databricks. Experience with at least 2 hyperscalers and Big Data processing services like Apache Spark, Beam. Knowledge of technologies like BigQuery, Redshift, Synapse, Pub/Sub, Kinesis, Kafka, Dataflow, Airflow, ADF. Consulting experience and ability to design … communication skills. Minimum 5 years' experience. Leadership and mentoring abilities. Skills in at least 2 hyperscalers (GCP, AWS, Azure). Experience with Big Data, Spark, Beam, Pub/Sub, Kinesis, Kafka, Dataflow, Airflow, ADF. Designing solutions with Databricks, Jenkins, Terraform, StackDriver.
… Flask, Django, or FastAPI. Proficiency in Python 3.x and libraries like Pandas, NumPy, and Dask. Experience with data manipulation and processing frameworks (e.g., PySpark, Apache Beam). Strong knowledge of databases, including SQL and NoSQL (e.g., PostgreSQL, MongoDB). Familiarity with ETL processes and tools such as Airflow …
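Several of these listings pair Pandas-style work with distributed processing frameworks. As a rough illustration of the PySpark side, a minimal sketch; the input path, column names, and output path are hypothetical, not taken from any listing:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read raw CSV data (hypothetical path; schema inference for brevity).
orders = spark.read.csv("data/orders.csv", header=True, inferSchema=True)

# Typical transformation: filter, derive a column, aggregate.
daily_totals = (
    orders
    .filter(F.col("status") == "completed")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Write results as Parquet for downstream analytics.
daily_totals.write.mode("overwrite").parquet("data/daily_totals")
spark.stop()
```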
… highest data throughput are implemented in Java. Within Data Engineering, Dataiku, Snowflake, Prometheus, and ArcticDB are used heavily. Kafka is used for data pipelines, Apache Beam for ETL, Bitbucket for source control, Jenkins for continuous integration, Grafana + Prometheus for metrics collection, and ELK for log shipping and monitoring …
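To make the Kafka-for-pipelines part of that stack concrete, a minimal producer sketch using the confluent-kafka Python client; the broker address, topic name, and payload are hypothetical:

```python
import json
from confluent_kafka import Producer

# Hypothetical broker; real deployments pass a full bootstrap list.
producer = Producer({"bootstrap.servers": "localhost:9092"})

def on_delivery(err, msg):
    # Report per-message delivery success or failure.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}]")

event = {"symbol": "VOD.L", "price": 72.31}  # hypothetical payload
producer.produce("market-data", value=json.dumps(event).encode("utf-8"),
                 callback=on_delivery)
producer.flush()  # block until outstanding messages are delivered
```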
… hands-on software development experience with Python and experience with modern software development and release engineering practices (e.g. TDD, CI/CD). Experience with Apache Spark or any other distributed data programming framework. Comfortable writing efficient SQL and debugging on cloud warehouses like Databricks SQL or Snowflake. Experience with …
… such as fraud detection, network analysis, and knowledge graphs. Optimize performance of graph queries and design for scalability. Support ingestion of large-scale datasets using Apache Beam, Spark, or Kafka into GCP environments. Implement metadata management, security, and data governance using Data Catalog and IAM. Work across functional teams …
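For the ingestion responsibility above, a hedged sketch of a small Apache Beam pipeline loading JSON events into BigQuery on GCP; the bucket, project, and table names are hypothetical, and on GCP this would typically run with the DataflowRunner:

```python
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Runner, project, and region flags would be supplied here in practice.
options = PipelineOptions()

with beam.Pipeline(options=options) as p:
    (
        p
        # Read newline-delimited JSON from a hypothetical GCS bucket.
        | "ReadJsonLines" >> beam.io.ReadFromText("gs://example-bucket/events/*.json")
        | "Parse" >> beam.Map(json.loads)
        # Append rows to an existing (hypothetical) BigQuery table.
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "example-project:analytics.events",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
        )
    )
```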
… years hands-on with Google Cloud Platform. Strong experience with BigQuery, Cloud Storage, Pub/Sub, and Dataflow. Proficient in SQL, Python, and Apache Beam. Familiarity with DevOps and CI/CD pipelines in cloud environments. Experience with Terraform, Cloud Build, or similar tools for infrastructure automation. Understanding of … available). Responsibilities: Design, build, and maintain scalable and reliable data pipelines on Google Cloud Platform (GCP). Develop ETL processes using tools like Cloud Dataflow, Apache Beam, BigQuery, and Cloud Composer. Collaborate with data analysts, scientists, and business stakeholders to understand data requirements. Optimize performance and cost-efficiency of …
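On the orchestration side mentioned above (Cloud Composer is managed Airflow), a minimal sketch of a daily DAG with a single BigQuery task; the DAG id, SQL, and table names are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="daily_events_rollup",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # schedule_interval on Airflow versions before 2.4
    catchup=False,
) as dag:
    # One query job rolling up a hypothetical events table by day.
    rollup = BigQueryInsertJobOperator(
        task_id="rollup_events",
        configuration={
            "query": {
                "query": (
                    "SELECT event_date, COUNT(*) AS n "
                    "FROM `example-project.analytics.events` "
                    "GROUP BY event_date"
                ),
                "useLegacySql": False,
            }
        },
    )
```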
… fit for purpose. Experience that will put you ahead of the curve: experience using Python on Google Cloud Platform for Big Data projects, including BigQuery, Dataflow (Apache Beam), Cloud Run Functions, Cloud Run, Cloud Workflows, and Cloud Composer. SQL development skills. Experience using Dataform or dbt. Demonstrated strength in data modelling …
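As a small illustration of the Python-on-GCP work these roles describe, a sketch using the official BigQuery client library; the project, dataset, and table names are hypothetical, and credentials are assumed to come from the environment:

```python
from google.cloud import bigquery

# Picks up project and credentials from the environment
# (e.g. GOOGLE_APPLICATION_CREDENTIALS).
client = bigquery.Client()

query = """
    SELECT order_date, SUM(amount) AS total_amount
    FROM `example-project.analytics.orders`
    GROUP BY order_date
    ORDER BY order_date
"""

# Run the query and iterate over the result rows.
for row in client.query(query).result():
    print(row["order_date"], row["total_amount"])
```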