It would be advantageous if you had experience working with relational and NoSQL databases (such as tuning and optimising complex queries for highly scalable systems) and query languages (specifically Hive/SparkSQL and ANSI SQL), as well as experience building large-scale Spark 3.x applications and data pipelines, ideally with batch processing running on Hadoop clusters. Experience with messaging queues would likewise be advantageous.
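Since the posting centres on Spark 3.x batch pipelines queried through Hive/SparkSQL, here is a minimal sketch of what such a job can look like in Scala. The database, table, column names, and output path are all illustrative assumptions, not details from the posting.

```scala
// Minimal sketch of a Spark 3.x batch job, assuming a Hive table
// "sales.transactions" (hypothetical) and a cluster with Hive support.
import org.apache.spark.sql.SparkSession

object DailyRevenueJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-revenue-batch")
      .enableHiveSupport() // read/write Hive tables via the metastore
      .getOrCreate()

    // ANSI-style SparkSQL over a Hive table (hypothetical schema).
    val daily = spark.sql(
      """SELECT txn_date, SUM(amount) AS revenue
        |FROM sales.transactions
        |GROUP BY txn_date""".stripMargin)

    // Write partitioned Parquet back to HDFS for downstream consumers.
    daily.write
      .mode("overwrite")
      .partitionBy("txn_date")
      .parquet("hdfs:///warehouse/reports/daily_revenue")

    spark.stop()
  }
}
```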
Spark - Must have
Scala - Must have (hands-on coding)
Hive & SQL - Must have

Note: Please screen the profile before the interview. The candidate must know the Scala coding language; a PySpark profile will not help here. The interview includes a coding test.

Job Description: Scala/Spark
Good Big Data resource with the below skillset:
- Spark
- Scala
- Hive/HDFS/HQL …
- Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.)

Experience in Big Data technologies; Real-Time data processing platform (Spark Streaming) experience would be an advantage. Consistently demonstrates clear and concise written and verbal communication. A history of delivering against agreed objectives. Ability to multi-task and work under pressure. Demonstrated problem-solving and decision-making skills. Excellent analytical skills.
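Because this role names Spark Streaming experience as an advantage, a brief Structured Streaming sketch in Scala may help clarify the expectation. The Kafka broker address and topic name are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

```scala
// Hedged sketch of a real-time pipeline with Structured Streaming.
// Broker "broker:9092" and topic "clicks" are assumptions for illustration.
import org.apache.spark.sql.SparkSession

object ClickStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("clickstream-streaming")
      .getOrCreate()
    import spark.implicits._

    // Requires the spark-sql-kafka-0-10 connector on the classpath.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "clicks")
      .load()
      .selectExpr("CAST(value AS STRING) AS click")

    // Count distinct click values across the stream and print each batch.
    val query = events.groupBy($"click").count()
      .writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```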
Modelling: Physical, logical, conceptual models; data flow diagrams; ontologies; UML/Visio/Sparx
Data Standards: Technical specs, code assurance, championing interoperability
Metadata Management: Data catalogues, repositories; tools like Apache Atlas, Hive Metastore, AWS Glue/DataZone
Data Design: Data lakes, warehouses, lakehouses, pipelines, meshes, marketplaces

If you're passionate about shaping data strategy and architecture in a …
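As a concrete illustration of the metadata-management side of this role, the sketch below registers a data-lake dataset as an external table in the Hive Metastore, which is what catalogue tools such as Apache Atlas or AWS Glue then discover. The database, table, schema, and HDFS path are assumptions made for the example.

```scala
// Hedged sketch: an EXTERNAL table records only metadata in the metastore
// and leaves the files in place in the lake. All names are illustrative.
import org.apache.spark.sql.SparkSession

object RegisterLakeTable {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("register-lake-table")
      .enableHiveSupport()
      .getOrCreate()

    spark.sql("CREATE DATABASE IF NOT EXISTS lake")

    // Register existing Parquet files so catalogue users can find them.
    spark.sql(
      """CREATE EXTERNAL TABLE IF NOT EXISTS lake.customer_events (
        |  customer_id BIGINT,
        |  event_type  STRING,
        |  event_ts    TIMESTAMP
        |)
        |STORED AS PARQUET
        |LOCATION 'hdfs:///lake/raw/customer_events'""".stripMargin)

    spark.stop()
  }
}
```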