BW and migration of these data warehouses to modern cloud data platforms. Deep understanding and hands-on experience with big data technologies like Hadoop, HDFS, Hive, Spark, and cloud data platform services. Proven track record of designing and implementing large-scale data architectures in complex environments. CI/CD/DevOps experience …
tools like Ansible, Terraform, Docker, Kafka, Nexus. Experience with observability platforms: InfluxDB, Prometheus, ELK, Jaeger, Grafana, Nagios, Zabbix. Familiarity with Big Data tools: Hadoop, HDFS, Spark, HBase. Ability to write code in Go, Python, Bash, or Perl for automation. Work Experience: 5-7+ years of proven experience in previous …
constraints are met by the software team. Ability to initiate and implement ideas to solve business problems. Preferred qualifications, capabilities, and skills: Knowledge of HDFS, Hadoop, Databricks. Knowledge of Airflow, Control-M. Familiarity with containers and container orchestration such as ECS, Kubernetes, and Docker. Familiarity with troubleshooting common networking technologies …
systems: security, configuration, and management. Database design, setup, and administration (DBA) experience with Sybase, Oracle, or UDB. Big data systems: Hadoop, Snowflake, NoSQL, HBase, HDFS, MapReduce. Web and mobile technologies, digital workflow tools. Site reliability engineering and runtime operational tools (agent-based technologies) and processes (capacity, change, and incident management …
Association of Collegiate Conference and Events Directors-International
as a member of an agile feature team. Help maintain code quality via code reviews. Skill Requirements: Proficiency in administering Big Data technologies (Hadoop, HDFS, Spark, Hive, YARN, Oozie, Kafka, HBase, Apache stack). Proficiency in defining highly scalable platform architecture. Knowledge of architectural design patterns, highly optimized, low latency …
Cheltenham, Gloucestershire, United Kingdom Hybrid / WFH Options
Ripjar
more. In this role, you will be using Python (specifically PySpark) and Node.js for processing data, backed by various Hadoop stack technologies such as HDFS and HBase. MongoDB and Elasticsearch are used for indexing smaller datasets. Airflow & NiFi are used to co-ordinate the processing of data, while Jenkins, Jira …
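The listing above describes a stack in which PySpark jobs read data held in HDFS/HBase, derive smaller datasets for indexing in Elasticsearch or MongoDB, and are co-ordinated by Airflow or NiFi. As a rough, hedged sketch of that pattern only (the paths, column names, and job name below are hypothetical assumptions, not taken from the listing), a minimal PySpark job might look like this:

```python
# Minimal sketch of the kind of PySpark job described above: read raw records
# from HDFS, aggregate them, and write a small derived dataset back out.
# All paths and column names here are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("entity-activity-rollup")  # hypothetical job name
    .getOrCreate()
)

# Raw events landed on HDFS (path is an assumption for illustration).
events = spark.read.parquet("hdfs:///data/raw/events")

# Aggregate per entity; a summary like this is small enough to be pushed
# into an index such as Elasticsearch or MongoDB in a later step.
summary = (
    events
    .groupBy("entity_id")
    .agg(
        F.count("*").alias("event_count"),
        F.max("event_time").alias("last_seen"),
    )
)

summary.write.mode("overwrite").parquet("hdfs:///data/derived/entity_summary")

spark.stop()
```

In the architecture the listing describes, a job along these lines would typically be triggered by an Airflow DAG or a NiFi flow rather than run by hand.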
Ripjar specialises in the development of software and data products that help governments and organisations combat serious financial crime. Our technology is used to identify criminal activity such as money laundering and terrorist financing, and enables organisations to enforce sanctions …