management tasks. Operations environment: ITIL (basic knowledge); SDLC (basic understanding of Software Development Life Cycle processes). Monitoring tools: experience with New Relic (preferred), Grafana, Dynatrace, Zabbix. DevOps practices: CI/CD pipelines, specifically on the MS Azure DevOps platform. Scripting languages: PowerShell (ability to write scripts); understanding of YAML. …
Solid experience with Python. Solid experience of observability tooling. Good experience in dashboard creation/data visualisation using tools such as Google Looker or Grafana. Strong CI/CD experience. Strong containerisation experience. Serverless cloud experience, across any cloud. Networking security/cloud security. …
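As a small illustration of the kind of aggregation that typically sits behind a latency dashboard panel (the function name and the sample window below are hypothetical, not taken from any listing; only the Python standard library is used):

```python
import statistics

def latency_percentiles(samples_ms):
    """Summarise request latencies the way a dashboard panel might:
    p50/p95/p99 over a window of samples (milliseconds).
    statistics.quantiles(n=100) returns 99 cut points, so index 49
    is the median, index 94 is p95, index 98 is p99."""
    if not samples_ms:
        raise ValueError("no samples in window")
    q = statistics.quantiles(samples_ms, n=100)
    return {"p50": q[49], "p95": q[94], "p99": q[98]}

# Hypothetical window of latencies with two slow outliers
window = [12, 15, 11, 240, 13, 14, 16, 12, 180, 13]
print(latency_percentiles(window))
```

With so few samples the tail percentiles are dominated by the two outliers; a real panel would aggregate over thousands of points per window.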
Edinburgh, Midlothian, Scotland, United Kingdom Hybrid / WFH Options
McGregor Boyall Associates Limited
DB: Postgres. AWS: IAM, S3, EC2, RDS. Ansible. TypeScript CDK and AWS dev tools such as CloudFormation. SQL. Monitoring solutions such as CloudWatch and Grafana. Design and implementation of solutions using a service-based and serverless architecture. Agile delivery models such as Scrum and Kanban. Cloud database monitoring, telemetry, intelligence …
Telford, Shropshire, United Kingdom Hybrid / WFH Options
Experis
years' experience. S3 and AWS. HashiCorp Vault. Talend. AWS EFS. AWS S3. Agile Scrum. AWS data encryption and APIs. Agile working. Monitoring and alerting (Grafana, Telegraf, CloudWatch). All profiles will be reviewed against the required skills and experience. Due to the high number of applications we will only be able …
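Monitoring-and-alerting stacks like those named above (Grafana, Telegraf, CloudWatch) ultimately evaluate metric values against thresholds. A toy sketch of that rule evaluation, with made-up metric names and limits (not any tool's real API):

```python
from dataclasses import dataclass

@dataclass
class Alert:
    metric: str
    value: float
    threshold: float

def evaluate(metrics, thresholds):
    """Compare the latest metric snapshot against per-metric upper
    thresholds and return an Alert for each breach. This mimics, in
    miniature, what a Grafana alert rule or CloudWatch alarm does."""
    alerts = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            alerts.append(Alert(name, value, limit))
    return alerts

# Hypothetical snapshot and limits
snapshot = {"cpu_pct": 91.0, "disk_pct": 42.0, "error_rate": 0.07}
limits = {"cpu_pct": 85.0, "error_rate": 0.05}
for a in evaluate(snapshot, limits):
    print(f"ALERT {a.metric}: {a.value} > {a.threshold}")
```

Real systems add evaluation windows and hysteresis so a single spiky sample does not page anyone, but the core comparison is this simple.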
Leeds, West Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
Damia Group Ltd
Resilient Distributed Datasets (RDDs), understand any memory-related problems, and make corrective recommendations. Able to monitor Spark jobs using wider tools such as Grafana to see whether there are cluster-level failures. As a Spark architect, can demonstrate deep knowledge of how Cloudera (CDP) Spark is set up and how the runtime libraries are used by PySpark code. Prophecy: high-level …
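The memory problems mentioned above usually come back to Spark's unified memory model. A back-of-the-envelope calculator, as a sketch: the function name is ours, but the ~300 MB reserved heap and the 0.6/0.5 defaults correspond to Spark's documented `spark.memory.fraction` and `spark.memory.storageFraction` settings:

```python
def spark_unified_memory(executor_heap_mb,
                         memory_fraction=0.6,
                         storage_fraction=0.5,
                         reserved_mb=300):
    """Estimate the unified execution/storage region per executor.
    Spark reserves ~300 MB of heap, then spark.memory.fraction (0.6 by
    default) of the remainder is shared by execution and storage;
    spark.memory.storageFraction (0.5 by default) of that region is
    storage memory that execution cannot evict."""
    usable = (executor_heap_mb - reserved_mb) * memory_fraction
    storage = usable * storage_fraction
    return {"unified_mb": usable, "storage_mb": storage}

# Example: an 8 GB (8192 MB) executor heap
print(spark_unified_memory(8192))
```

When cached RDDs/DataFrames plus shuffle buffers exceed the unified region, tasks spill to disk or fail with OOM, which is exactly the symptom a Grafana dashboard over executor metrics helps to spot.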