BW and migration of these data warehouses to modern cloud data platforms. Deep understanding and hands-on experience with big data technologies like Hadoop, HDFS, Hive, Spark and cloud data platform services. Proven track record of designing and implementing large-scale data architectures in complex environments. CI/CD and DevOps experience
data workflows and reduce manual interventions. Must have: AWS, ETL, EMR, Glue, Spark/Scala, Java, Python. Good to have: Cloudera (Spark, Hive, Impala, HDFS), Informatica PowerCenter, Informatica DQ/DG, Snowflake, Erwin. Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
of distributed computing environments and excellent Linux/Unix system administrator skills. PREFERRED QUALIFICATIONS - Proficient in Hadoop MapReduce and its ecosystem (ZooKeeper, HBase, HDFS, Pig, Hive, Spark, etc.). - Good understanding of ETL principles and how to apply them within Hadoop. - Prior working experience with AWS - any or all
London, South East England, United Kingdom Hybrid / WFH Options
Kantar Media
skill set, including hands-on experience with Linux, AWS, Azure, Oracle 19 (admin), Tomcat, UNIX tools, Bash/sh, SQL, Python, Hive, Hadoop/HDFS, and Spark. Work within a modern cloud DevOps environment using Azure, Git, Airflow, Kubernetes, Helm, and Terraform. Demonstrate solid knowledge of computer hardware and network
tools like Ansible, Terraform, Docker, Kafka, Nexus. Experience with observability platforms: InfluxDB, Prometheus, ELK, Jaeger, Grafana, Nagios, Zabbix. Familiarity with Big Data tools: Hadoop, HDFS, Spark, HBase. Ability to write code in Go, Python, Bash, or Perl for automation. Work Experience: 5-7+ years of proven experience in previous
e.g., cloud, artificial intelligence, machine learning, mobile, etc.) Preferred qualifications, capabilities, and skills: Knowledge of AWS. Knowledge of Databricks. Understanding of Cloudera Hadoop, Spark, HDFS, HBase, Hive. Understanding of Maven or Gradle. About the Team: J.P. Morgan is a global leader in financial services, providing strategic advice and products to
logical data structures. Encouraging self-learning among the team. Essential Skills & Qualifications: A confident engineer with authoritative knowledge of Java and Hadoop, including HDFS, Hive, and Spark. Comfortable working with large data volumes and able to demonstrate a firm understanding of logical data structures and analysis techniques. Strong skills
systems: security, configuration, and management. Database design, setup, and administration (DBA) experience with Sybase, Oracle, or UDB. Big data systems: Hadoop, Snowflake, NoSQL, HBase, HDFS, MapReduce. Web and mobile technologies, digital workflow tools. Site reliability engineering and runtime operational tools (agent-based technologies) and processes (capacity, change, and incident management
and using latest technologies. Ability to solve ambiguous, unexplored, and cross-team problems effectively. Understanding of distributed file systems such as S3, ADLS, or HDFS. Experience with AWS, Azure, and Google Cloud Platform and background in large-scale data processing systems (e.g., Hadoop, Spark, etc.) is a plus. Ability to
West Midlands, England, United Kingdom Hybrid / WFH Options
Aubay UK
BI. Proficiency in analytics platforms such as SAS and Python. Familiarity with Amazon Elastic File System (EFS), S3 storage, and the Hadoop Distributed File System (HDFS). Key Role Responsibilities: Lead the design and development of large-scale data solutions, ensuring they meet business objectives and adhere to regulatory standards. Offer
Core) coding language. A PySpark profile will not help here. Scala/Spark • Good Big Data resource with the below skill set: Spark, Scala, Hive/HDFS/HQL • Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.) • Experience in Big Data technologies; real-time data processing platform (Spark Streaming) experience would
Ripjar specialises in the development of software and data products that help governments and organisations combat serious financial crime. Our technology is used to identify criminal activity such as money laundering and terrorist financing and enables organisations to enforce sanctions