Technical Discipline. Technical Expertise: Proficiency in SQL and experience with cloud-based data pipelines (Azure, AWS, GCP). Familiarity with big data tools such as Hadoop and Spark. Data Management Skills: Hands-on experience with large data sets, data pipelines, workflow management tools, and Azure cloud services.
Automation and configuration management: Ansible (plus Puppet, SaltStack), Terraform, CloudFormation; Node.js, React/Material UI (plus Angular), Python, JavaScript; big data processing and analysis, e.g. Apache Hadoop (CDH), Apache Spark; Red Hat Enterprise Linux, CentOS, Debian or Ubuntu; Java 8, Spring Framework (preferably Spring Boot), AMQP/RabbitMQ, open-source technologies.
Bristol, England, United Kingdom Hybrid / WFH Options
Made Tech
how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop. Good understanding of possible architectures involved in modern data system design (Data Warehouses, Data Lakes, Data Meshes). Ability to create data pipelines.
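The JSON-to-CSV transformation work described above can be sketched minimally. In practice this would run on Apache Spark or Databricks (e.g. reading with a DataFrame API and writing back out); to keep the sketch self-contained and runnable, Python's standard library stands in here, and the field names and sample records are illustrative assumptions:

```python
import csv
import io
import json

# Hypothetical sample input: JSON records as a pipeline stage might
# receive them. On Spark/Databricks this would be a distributed read
# (e.g. spark.read.json); here plain Python illustrates the shape of
# the transformation only.
raw_json = '[{"id": 1, "city": "Bristol"}, {"id": 2, "city": "London"}]'

records = json.loads(raw_json)

# Transform the parsed records into CSV, the target data type.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["id", "city"])
writer.writeheader()
writer.writerows(records)

print(buffer.getvalue())
```

The same read-transform-write pattern scales up in Spark by swapping the standard-library calls for DataFrame reads and writes, while the pipeline logic stays conceptually identical.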
Monitoring utilising products such as Prometheus, Grafana, ELK, Filebeat, etc. Observability (SRE). Big Data solutions (ecosystems) and technologies such as Apache Spark and the Hadoop ecosystem. Edge technologies, e.g. NGINX, HAProxy. Excellent knowledge of YAML or similar languages. Desirable Requirements: JupyterHub awareness; MinIO or similar S3 storage.