to streamline data workflows and reduce manual intervention. Must have: AWS, ETL, EMR, Glue, Spark/Scala, Java, Python. Good to have: Cloudera (Spark, Hive, Impala, HDFS), Informatica PowerCenter, Informatica DQ/DG, Snowflake, Erwin. Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
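For context on the must-have stack, here is a minimal sketch of the kind of Spark/Scala batch ETL job typically run on EMR or Glue: read raw files from S3, cleanse them, and write partitioned Parquet. The bucket paths and column names (order_id, order_ts) are illustrative assumptions, not taken from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("orders-etl").getOrCreate()

    // Raw CSV landed in S3 (placeholder bucket/prefix)
    val raw = spark.read.option("header", "true").csv("s3://raw-bucket/orders/")

    val cleaned = raw
      .filter(col("order_id").isNotNull)                      // drop malformed rows
      .withColumn("order_ts", to_timestamp(col("order_ts")))  // normalise timestamps
      .withColumn("order_date", to_date(col("order_ts")))     // derive a partition key

    // Partitioned Parquet for downstream consumers (e.g. a Glue catalog table)
    cleaned.write.mode("overwrite").partitionBy("order_date").parquet("s3://curated-bucket/orders/")

    spark.stop()
  }
}
```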
the following additional languages: Java, C#, C++, Scala
• Familiarity with Big Data technology in cloud and on-premises environments: Hadoop, HDFS, Spark, NoSQL databases, Hive, MongoDB, Airflow, Kafka, AWS, Azure, Docker, or Snowflake
• Good understanding of object-oriented programming (OOP) principles and concepts
• Familiarity with advanced SQL techniques
• Familiarity with data visualization tools such as Tableau or Power BI
• Familiarity with Apache Flink or Apache Storm
• Understanding of DevOps practices and tools for CI/CD pipelines
• Awareness of data security best practices and compliance requirements (e.g., GDPR, HIPAA)

To Qualify: You should be willing to relocate.
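The "advanced SQL techniques" this listing asks about usually means things like window functions. Below is a small, self-contained Spark/Scala example of ranking rows within a partition; the sales data is made up for illustration.

```scala
import org.apache.spark.sql.SparkSession

object TopProductsPerCategory {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("window-demo").master("local[*]").getOrCreate()
    import spark.implicits._

    // Toy data standing in for a real sales table
    val sales = Seq(
      ("books", "atlas", 120.0), ("books", "novel", 80.0),
      ("games", "chess", 60.0), ("games", "cards", 95.0)
    ).toDF("category", "product", "revenue")
    sales.createOrReplaceTempView("sales")

    // Window function: rank products by revenue within each category
    spark.sql("""
      SELECT category, product, revenue,
             RANK() OVER (PARTITION BY category ORDER BY revenue DESC) AS rnk
      FROM sales
    """).show()

    spark.stop()
  }
}
```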
Centre of Excellence. Skills, knowledge and expertise: Deep expertise in the Databricks platform, including Jobs and Workflows, Cluster Management, Catalog Design and Maintenance, Apps, Hive Metastore Management, Network Management, Delta Sharing, Dashboards, and Alerts. Proven experience working with big data technologies, e.g., Databricks and Apache Spark. Proven experience …
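Hands-on Databricks work of the kind listed often comes down to Delta Lake operations inside a Workflow task. Here is a hedged sketch of a merge/upsert, assuming the delta-spark library is on the classpath and using a hypothetical Unity Catalog table name (main.crm.customers):

```scala
import org.apache.spark.sql.SparkSession
import io.delta.tables.DeltaTable

object DeltaUpsert {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("delta-upsert").getOrCreate()

    // Incoming batch of customer updates (placeholder path)
    val updates = spark.read.parquet("/mnt/landing/customers/")

    // Upsert into a Delta table: update existing rows, insert new ones
    DeltaTable.forName(spark, "main.crm.customers")
      .as("t")
      .merge(updates.as("s"), "t.customer_id = s.customer_id")
      .whenMatched().updateAll()
      .whenNotMatched().insertAll()
      .execute()
  }
}
```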
Services, Telecom and Media, Retail and CPG, and Public Services. Consolidated revenues of $13 billion.

Job Description:
=============
• Spark - Must Have
• Scala - Must Have
• Hive & SQL - Must Have
• Hadoop - Must Have
• Communication - Must Have
• Banking/Capital Markets Domain - Good to Have

Note: The candidate should know Scala/Python (Core) as a coding language; a PySpark profile will not help here.

Scala/Spark:
• Good Big Data resource with the below skillset: Spark, Scala, Hive/HDFS/HQL
• Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.)
• Experience in Big Data technologies and real-time data processing platforms (Spark Streaming) …
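Since this listing stresses Scala/Spark with real-time processing, here is a minimal Structured Streaming sketch: consume a Kafka topic and append the payload to Parquet on HDFS. It assumes the spark-sql-kafka connector is available; the broker address, topic, and paths are placeholders.

```scala
import org.apache.spark.sql.SparkSession

object TradesStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("trades-stream").getOrCreate()

    // Subscribe to a Kafka topic (placeholder broker and topic names)
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "trades")
      .load()
      .selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    // Append to Parquet; the checkpoint directory tracks streaming progress
    stream.writeStream
      .format("parquet")
      .option("path", "hdfs:///warehouse/trades")
      .option("checkpointLocation", "hdfs:///checkpoints/trades")
      .start()
      .awaitTermination()
  }
}
```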
London, South East England, United Kingdom · Hybrid / WFH Options
Kantar Media
a broad IT skill set, including hands-on experience with Linux, AWS, Azure, Oracle 19 (admin), Tomcat, UNIX tools, Bash/sh, SQL, Python, Hive, Hadoop/HDFS, and Spark. Work within a modern cloud DevOps environment using Azure, Git, Airflow, Kubernetes, Helm, and Terraform. Demonstrate solid knowledge of … and network technologies. Experienced in writing and running SQL and Bash scripts to automate tasks and manage data. Skilled in installing, configuring, and managing Hive on Spark with HDFS. Strong analytical skills with the ability to troubleshoot complex issues and analyze large volumes of text or binary data in Linux or Hive environments.

Required: You're enthusiastic and eager to learn, especially in fast-paced, dynamic environments. You enjoy solving problems and take a logical, methodical approach to troubleshooting. Under pressure, you remain calm and focused, effectively prioritizing tasks to meet multiple deadlines. You're flexible and adaptable.
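The Hive-on-Spark and bulk log analysis duties described above might look like the sketch below: a SparkSession with Hive support running a diagnostic aggregation. The ops.app_logs table and its columns are assumptions for illustration.

```scala
import org.apache.spark.sql.SparkSession

object HiveLogCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-log-check")
      .enableHiveSupport()   // resolve tables via the Hive metastore (hive-site.xml)
      .getOrCreate()

    // Count error lines per day -- typical bulk text troubleshooting
    spark.sql("""
      SELECT log_date, COUNT(*) AS errors
      FROM ops.app_logs
      WHERE level = 'ERROR'
      GROUP BY log_date
      ORDER BY log_date
    """).show(50, truncate = false)

    spark.stop()
  }
}
```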
and Solution Architect teams to design the overall solution architecture for end-to-end data flows. Utilize big data technologies such as Cloudera, Hue, Hive, HDFS, and Spark for data processing and storage. Ensure smooth data management for marketing consent and master data management (MDM) systems.

Key Skills and Experience:
• … delivery for streamlined development workflows.
• Azure Data Factory/Databricks: experience with these services is a plus for handling complex data processes.
• Cloudera (Hue, Hive, HDFS, Spark): experience with these big data tools is highly desirable for data processing.
• Azure DevOps, Vault: core skills for working in the Azure cloud.
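MDM responsibilities on a Cloudera stack often reduce to "golden record" jobs like this hedged sketch: keep the most recent row per customer from a Hive table and write the deduplicated copy to HDFS. The table, columns, and paths are illustrative assumptions.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

object GoldenCustomer {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("golden-customer")
      .enableHiveSupport()
      .getOrCreate()

    // Latest record per customer wins (hypothetical table and columns)
    val w = Window.partitionBy("customer_id").orderBy(col("updated_at").desc)

    spark.table("mdm.customer_raw")
      .withColumn("rn", row_number().over(w))
      .filter(col("rn") === 1)
      .drop("rn")
      .write.mode("overwrite")
      .parquet("hdfs:///data/mdm/customer_golden")

    spark.stop()
  }
}
```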