RDBMS, NoSQL and Big Data technologies. Data visualization – tools like Tableau. Big data – Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig and Hive. Data processing frameworks – Spark and Spark Streaming …
London (City of London), South East England, United Kingdom
Ubique Systems
Spark – must have. Scala – must have (hands-on coding). Hive & SQL – must have. Note: the candidate must know the Scala coding language; a PySpark-only profile will not fit here. The interview includes a coding test.
Job Description: Scala/Spark
• Strong Big Data resource with the following skill set: Spark, Scala, Hive/HDFS/HQL
• Linux-based … Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.)
• Experience in Big Data technologies; real-time data processing platform (Spark Streaming) experience would be an advantage
• Consistently demonstrates clear and concise written and verbal communication
• A history of delivering against agreed objectives
• Ability to multi-task and work under pressure
• Demonstrated problem-solving and decision-making skills
• Excellent …
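For context, a minimal Scala sketch of the kind of hands-on Spark-on-Hive coding this listing describes: query a Hive table with HQL, aggregate, and write the result back. This is illustrative only and not part of the role description; the database, table, and column names (sales.transactions, region, txn_date, amount) are hypothetical placeholders.

```scala
// Illustrative only: a small Spark job using Hive support and HQL.
// The sales.transactions table and its columns are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, sum}

object HiveAggregationJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("HiveAggregationJob")
      .enableHiveSupport() // lets Spark read and write Hive-managed tables
      .getOrCreate()

    // Query an existing Hive table with HQL, aggregate per region and day,
    // and persist the result as a new Hive table.
    val daily = spark
      .sql("SELECT region, txn_date, amount FROM sales.transactions")
      .groupBy(col("region"), col("txn_date"))
      .agg(sum(col("amount")).as("total_amount"))

    daily.write.mode("overwrite").saveAsTable("sales.daily_totals")

    spark.stop()
  }
}
```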
… experience across AWS Glue, Lambda, Step Functions, RDS, Redshift, and Boto3. Proficient in one of Python, Scala or Java, with strong experience in Big Data technologies such as Spark and Hadoop. Practical knowledge of building real-time event streaming pipelines (e.g., Kafka, Spark Streaming, Kinesis). Proven experience developing modern data architectures … data governance including GDPR.
Bonus points for: expertise in data modelling, schema design, and handling both structured and semi-structured data; familiarity with distributed systems such as Hadoop, Spark, HDFS, Hive and Databricks; exposure to AWS Lake Formation and automation of the ingestion and transformation layers; background in delivering solutions for highly regulated industries; passion for mentoring and enabling data …
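As a rough illustration of the real-time event streaming pipelines mentioned above, here is a hedged Scala sketch using Spark Structured Streaming with a Kafka source (it assumes the spark-sql-kafka connector is on the classpath). The broker address, topic name, bucket and paths are hypothetical placeholders, not details from the listing.

```scala
// Illustrative sketch only: consume Kafka events with Spark Structured Streaming
// and land them as Parquet micro-batches. All endpoints and paths are hypothetical.
import org.apache.spark.sql.SparkSession

object KafkaEventPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("KafkaEventPipeline")
      .getOrCreate()

    // Read a stream of events from a Kafka topic.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // hypothetical broker
      .option("subscribe", "events")                    // hypothetical topic
      .load()
      .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")

    // Write each micro-batch as Parquet so downstream batch jobs can consume it.
    val query = events.writeStream
      .format("parquet")
      .option("path", "s3a://example-bucket/raw/events/") // hypothetical output path
      .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
      .start()

    query.awaitTermination()
  }
}
```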
Sheffield, South Yorkshire, England, United Kingdom (Hybrid / WFH options)
Vivedia Ltd
… of ETL/ELT pipelines, data modeling, and data warehousing. Experience with cloud platforms (AWS, Azure, GCP) and tools like Snowflake, Databricks, or BigQuery. Familiarity with streaming technologies (Kafka, Spark Streaming, Flink) is a plus. Tools & frameworks: Airflow, dbt, Prefect, CI/CD pipelines, Terraform. Mindset: curious, data-obsessed, and driven to …