data workflows and reduce manual interventions. Must have: AWS, ETL, EMR, Glue, Spark/Scala, Java, Python. Good to have: Cloudera (Spark, Hive, Impala, HDFS), Informatica PowerCenter, Informatica DQ/DG, Snowflake, Erwin. Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
at least one of the following additional languages: Java, C#, C++, Scala. Familiarity with Big Data technology in cloud and on-premises environments: Hadoop, HDFS, Spark, NoSQL databases, Hive, MongoDB, Airflow, Kafka, AWS, Azure, Docker, or Snowflake. Good understanding of object-oriented programming (OOP) principles and concepts. Familiarity with advanced SQL.
London, South East England, United Kingdom (Hybrid / WFH options)
Kantar Media
skill set, including hands-on experience with Linux, AWS, Azure, Oracle 19 (admin), Tomcat, UNIX tools, Bash/sh, SQL, Python, Hive, Hadoop/HDFS, and Spark. Work within a modern cloud DevOps environment using Azure, Git, Airflow, Kubernetes, Helm, and Terraform. Demonstrate solid knowledge of computer hardware and network
Core) coding language; a PySpark profile will not help here. Scala/Spark: a good Big Data resource with the following skill set: Spark, Scala, Hive/HDFS/HQL; Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.); experience in Big Data technologies; real-time data processing platform (Spark Streaming) experience would