maintain high-volume Java- or Scala-based data processing jobs using industry-standard tools and frameworks in the Hadoop ecosystem, such as Spark, Kafka, Hive, Impala, Avro, Flume, Oozie, and Sqoop. Design and maintain schemas in our analytics database. Excellent at writing efficient SQL for loading and querying data. … technologies, languages, and techniques in the rapidly evolving world of high-volume data processing. Technologies We Use: Development languages/frameworks: Java/Scala, Apache Spark, Kafka, Vertica, JavaScript (React/Redux), MicroStrategy. Amazon: EMR, Step Functions, SQS, Lambda, and AWS cloud-native architectures. DevOps Tools: Terraform or Cloud …
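The listing above asks for analytics schema design and efficient SQL for loading and querying data. As a minimal, hypothetical sketch of that skill (using SQLite via Python for portability — not the Vertica/Hive stack the listing names; the table and column names are invented for illustration):

```python
import sqlite3

# Hypothetical analytics schema: a simple event fact table, with an
# index on the column the common query pattern filters/groups by.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (
        event_id   INTEGER PRIMARY KEY,
        user_id    INTEGER NOT NULL,
        event_type TEXT    NOT NULL,
        event_date TEXT    NOT NULL
    );
    CREATE INDEX idx_events_date ON events(event_date);
""")

# Bulk-load with executemany rather than row-at-a-time INSERT statements.
rows = [(1, 101, "click", "2024-01-01"),
        (2, 102, "view",  "2024-01-01"),
        (3, 101, "click", "2024-01-02")]
conn.executemany(
    "INSERT INTO events (event_id, user_id, event_type, event_date) "
    "VALUES (?, ?, ?, ?)",
    rows)

# Aggregate query over the indexed column.
counts = conn.execute(
    "SELECT event_date, COUNT(*) FROM events "
    "GROUP BY event_date ORDER BY event_date"
).fetchall()
print(counts)  # [('2024-01-01', 2), ('2024-01-02', 1)]
```

The same shape (declared schema, set-based bulk load, indexed aggregate query) carries over to the column-store and warehouse engines the listing mentions, though DDL syntax and indexing mechanisms differ per engine.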
or Kotlin. Experience with at least one of the following cloud providers: Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure. Spark, Hive, or Presto. Desirable Skills: Familiarity with the Scala programming language and popular frameworks such as Cats, Cats Effect, ZIO, and http4s. Familiarity with both … Terraform, and infrastructure-as-code (IaC) best practices. Familiarity with the Python programming language as applied to Spark and machine learning. Familiarity with Databricks and Apache Airflow. Required Education & Experience: Bachelor’s degree in Computer Science, Information Systems, Software, Electrical or Electronics Engineering, or a comparable field of study, and …
Data Engineer | 6-Month Contract | Inside IR35 | £450/day | Hiring Immediately. Job Description (Apache Iceberg, Spark, Big Data). Job Details Overview: 5+ years of overall IT experience with strong programming skills. Excellent skills in Apache Iceberg, Spark, and Big Data. 3+ years of Big Data … project development experience. Hands-on experience in areas such as Apache Iceberg, Spark, Hadoop, and Hive. Must have knowledge of at least one database, e.g. Postgres, Oracle, or MongoDB. Excellent grasp of SDLC processes and DevOps knowledge (Jira, Jenkins pipelines). Experience working in an Agile pod with team collaboration. Ability to participate in deep …
in Unix and shell scripting. * Minimum of 1 year of experience in investment banking or the financial sector. * Performance tuning of Oracle/MySQL/Hive SQL queries and Spark SQL statements. * Experience working with large, multi-terabyte databases (3+ TB). * Minimum of 5 years' experience in the Big … Data space (Hive, Impala, Spark SQL, HDFS, etc.). * Any cloud experience (AWS/Azure/Google/Oracle). * Solid experience with Oracle objects (packages, procedures, functions). * Very clear grasp of Oracle architecture. * Very strong debugging skills. * Proficient in query tuning. * Detail-oriented. * Strong written and verbal communication …
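The query-tuning skill this listing emphasises is usually exercised by reading an execution plan before and after adding an index. A minimal, hypothetical illustration using SQLite's `EXPLAIN QUERY PLAN` (the same workflow applies to Oracle, MySQL, Hive, and Spark SQL plans, though each engine's plan syntax differs; the `trades` table here is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE trades (trade_id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO trades (symbol, qty) VALUES (?, ?)",
    [("AAPL", 100), ("MSFT", 50), ("AAPL", 25)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail);
    # the human-readable step description is the 4th column.
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT SUM(qty) FROM trades WHERE symbol = 'AAPL'"

before = plan(query)  # without an index on symbol: a full table scan
conn.execute("CREATE INDEX idx_trades_symbol ON trades(symbol)")
after = plan(query)   # now an index search via idx_trades_symbol

print(before)
print(after)
```

The exact wording of the plan lines varies by SQLite version ("SCAN trades" vs. "SCAN TABLE trades"), but the before/after contrast — scan versus index search — is the tuning signal this kind of role works with daily.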