such as Teradata, Oracle, and SAP BW, and migration of these data warehouses to modern cloud data platforms. Deep understanding of and hands-on experience with big data technologies such as Hadoop, HDFS, Hive, and Spark, and with cloud data platform services. Proven track record of designing and implementing large-scale data architectures in complex environments. CI/CD and DevOps experience is a plus. Skills: Strong …
London, England, United Kingdom Hybrid / WFH Options
Citi
side development working in low-latency applications. Financial background preferable. Spark expertise (micro-batching, EOD/real-time). Python. In-memory databases. SQL skills and RDBMS concepts. Linux. Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.). Python, R, or equivalent scripting language(s). Excellent Excel analysis skills. Good understanding of investment banking data. A history of delivering against agreed objectives. Ability to …
Tech You’ll Work With This business doesn’t do “just one stack”. You’ll be expected to work across a broad tech landscape. Big Data & Distributed Systems: HDFS, Hadoop, Spark, Kafka. Cloud: Azure or AWS. Programming: Python, Java, Scala, PySpark – you’ll need two or more; Python preferred. Data Engineering Tools: Azure Data Factory, Databricks, Delta Lake, Azure …
Cambourne, England, United Kingdom Hybrid / WFH Options
Remotestar
Familiarity with distributed computing frameworks such as Spark, KStreams, and Kafka. Experience with Kafka and streaming frameworks. Understanding of monolithic vs. microservice architectures. Familiarity with the Apache ecosystem, including Hadoop modules (HDFS, YARN, HBase, Hive, Spark) and Apache NiFi. Experience with containerization and orchestration tools like Docker and Kubernetes. Knowledge of time-series or analytics databases such as Elasticsearch. Experience with AWS …
Cambridge, England, United Kingdom Hybrid / WFH Options
RemoteStar
Connect and streaming frameworks such as Kafka. Knowledge of monolithic versus microservice architecture concepts for building large-scale applications. Familiar with the Apache suite, including Hadoop modules such as HDFS, YARN, HBase, Hive, and Spark, as well as Apache NiFi. Familiar with containerization and orchestration technologies such as Docker and Kubernetes. Familiar with time-series or analytics databases such as Elasticsearch. Experience …
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
Citigroup Inc
reporting and data science solutions that are Accurate, Reliable, Relevant, Consistent, Complete, Scalable, Timely, Secure, Nimble. Olympus is built on a big data platform and on technologies under the Cloudera distribution such as HDFS, Hive, Impala, Spark, YARN, Sentry, Oozie, and Kafka. Our team interfaces with a vast client base and works in close partnership with Operations, Development, and other technology counterparts running the application … transparency. Skills & Qualifications: Experience in an application support role. Hands-on experience supporting applications built on Hadoop. Working knowledge of the various components and technologies under the Cloudera distribution, such as HDFS, Hive, Impala, Spark, YARN, Sentry, Oozie, and Kafka. Experienced in Linux. Strong knowledge of analysing cluster bottlenecks: performance tuning, effective resource usage, capacity planning, and investigation. Perform daily …
vSphere, KVM, Kubernetes. Experience with tools like Ansible, Terraform, Docker, Kafka, and Nexus. Experience with observability platforms: InfluxDB, Prometheus, ELK, Jaeger, Grafana, Nagios, Zabbix. Familiarity with big data tools: Hadoop, HDFS, Spark, HBase. Ability to write code in Go, Python, Bash, or Perl for automation. Work Experience: 6-8 years of proven experience in previous roles or one of the following …
vSphere, KVM, Kubernetes. Experience with tools like Ansible, Terraform, Docker, Kafka, and Nexus. Experience with observability platforms: InfluxDB, Prometheus, ELK, Jaeger, Grafana, Nagios, Zabbix. Familiarity with big data tools: Hadoop, HDFS, Spark, HBase. Ability to write code in Go, Python, Bash, or Perl for automation. Work Experience: 5-7+ years of proven experience in previous roles or one of the …
deployment). Linux and Windows operating systems: security, configuration, and management. Database design, setup, and administration (DBA) experience with Sybase, Oracle, or UDB. Big data systems: Hadoop, Snowflake, NoSQL, HBase, HDFS, MapReduce. Web and mobile technologies, digital workflow tools. Site reliability engineering and runtime operational tools (agent-based technologies) and processes (capacity, change, and incident management; job/batch management). Email …
Experience of providing third-line support and working closely with application support, operations, and infrastructure teams for debugging and issue resolution. • Knowledge of big data platforms such as Spark, Scala, Hive, HDFS, MapReduce, YARN, HBase, or Kafka is an advantage. Visa is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will …
relevant solutions to ensure design constraints are met by the software team. Ability to initiate and implement ideas to solve business problems. Preferred qualifications, capabilities, and skills: Knowledge of HDFS, Hadoop, Databricks. Knowledge of Airflow, Control-M. Familiarity with containers and container orchestration such as ECS, Kubernetes, and Docker. Familiarity with troubleshooting common networking technologies and issues. About Us J.P. …
London, England, United Kingdom Hybrid / WFH Options
Deutsche Bank
and build tools such as Maven, Bower, and SBT. Experience in architecting and deploying big data applications using Apache Hadoop in cloud scenarios. Expertise in Hadoop ecosystem components such as HDFS, Hive, HBase, Spark, Ranger, Kafka, and YARN. Knowledge of Kubernetes, Docker, and security practices such as Kerberos, LDAP, and SSL. Ability to read and modify open-source Java repositories. How we'll …
London, England, United Kingdom Hybrid / WFH Options
Deutsche Bank
Proven experience in architecting, designing, building, and deploying big data applications using the Apache Hadoop ecosystem in hybrid cloud and private cloud scenarios. Expert knowledge of the Apache Hadoop ecosystem (HDFS, Hive, HBase, Spark, Ranger, Kafka, YARN, etc.). Proficiency in Kubernetes and Docker. Strong understanding of security practices within big data environments, such as Kerberos, LDAP, POSIX, and SSL. Ability to …
Manchester, England, United Kingdom Hybrid / WFH Options
Ripjar
curiosity and interest in learning more. In this role, you will be using Python (specifically PySpark) and Node.js for processing data, backed by Hadoop-stack technologies such as HDFS and HBase. MongoDB and Elasticsearch are used for indexing smaller datasets. Airflow and NiFi are used to co-ordinate the processing of data, while Jenkins, Jira, Confluence, and GitHub are used … have a curiosity and interest in learning more. You will be using Python (specifically PySpark) and Node.js for processing data. You will be using Hadoop-stack technologies such as HDFS and HBase. Experience using MongoDB and Elasticsearch for indexing smaller datasets would be beneficial. Experience using Airflow and NiFi to co-ordinate the processing of data would be beneficial. You will …
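To make the PySpark requirement above concrete: jobs of this kind express data processing as map/reduce-style transformations over records. The following is a minimal, hypothetical sketch of that pattern in plain stdlib Python (not actual PySpark code, and not code from this role):

```python
from collections import Counter
from functools import reduce

def map_phase(doc: str) -> Counter:
    # Tokenise one document into word counts,
    # analogous to a map step in a Spark job.
    return Counter(doc.lower().split())

def reduce_phase(acc: Counter, part: Counter) -> Counter:
    # Merge partial counts, analogous to reduceByKey.
    acc.update(part)  # Counter.update adds counts together
    return acc

docs = ["spark processes data", "spark and hdfs store data"]
totals = reduce(reduce_phase, map(map_phase, docs), Counter())
# totals["spark"] == 2, totals["data"] == 2
```

In real PySpark the same shape appears as `rdd.flatMap(...).reduceByKey(...)` or a DataFrame group-by; the sketch only illustrates the transformation style, not the distributed execution.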
Description: Spark – must have. Scala – must have. Hive/SQL – must have. Job Description: Scala/Spark • Good big data resource with the following skillset: Spark, Scala, Hive/HDFS/HQL • Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.) • Experience in big data technologies; real-time data processing platform (Spark Streaming) experience would be an advantage. • Consistently demonstrates …
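The Spark Streaming mention above refers to micro-batching: a stream is consumed as a sequence of small batches, with state updated incrementally after each one. A hedged, plain-Python sketch of that pattern (not the Spark API; names and data are hypothetical):

```python
from collections import Counter
from itertools import islice

def micro_batches(stream, batch_size):
    """Yield successive fixed-size batches from an iterable,
    mimicking the micro-batch model used by Spark Streaming."""
    it = iter(stream)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

events = ["login", "click", "click", "logout", "click"]
running = Counter()
for batch in micro_batches(events, batch_size=2):
    running.update(batch)  # incremental state update per batch
# running: click=3, login=1, logout=1
```

A real streaming engine adds triggers, checkpointing, and fault tolerance on top of this loop; the sketch shows only the batching idea.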
Do you want your voice heard and your actions to count? Discover your opportunity with Mitsubishi UFJ Financial Group (MUFG), one of the world’s leading financial groups. Across the globe, we’re 120,000 colleagues, striving to make a More ❯