as messaging and streams.
o Building RESTful API services.
o Containerisation, Kubernetes, and serverless functions.
o Microservices and distributed tracing.
o Enterprise logging, monitoring, and alerting frameworks (e.g., ELK, Splunk, Prometheus, Grafana).
o Automation and infrastructure-as-code tooling (e.g., Terraform, Ansible).
• Experience working with Continuous Integration (CI), Continuous Delivery (CD), and continuous testing tools.
• Experience working …
processing and ETL tools like Apache Kafka, Spark, or Hadoop.
• Familiarity with containerization and orchestration tools such as Docker and Kubernetes
• Experience with monitoring and alerting tools such as Prometheus, Grafana, or the ELK stack for data infrastructure
• Understanding of ML algorithms, their development, and implementation
• Confidence developing end-to-end solutions
• Experience with infrastructure as code, e.g. Terraform, Ansible
If you …
solutions.
• Experience with multiprocessing, async I/O, and performance profiling
• Unit testing, performance testing, and BDD
• Understanding of OAuth 2.0 and secure authorization
• Proficiency with observability tools (Grafana, Prometheus, etc.)
• DevOps and CI/CD (Jenkins, GitOps)
• Strong communication and collaboration skills
• Understanding of deep learning and ML frameworks (TensorFlow, PyTorch)
• Secure coding practices and cloud …
with ML lifecycle tools, model monitoring, and versioning. Exposure to tools like KServe, Ray Serve, Triton, or vLLM is a big plus.
Bonus Points
• Experience with observability frameworks like Prometheus or OpenTelemetry
• Knowledge of ML libraries: TensorFlow, PyTorch, HuggingFace
• Exposure to Azure or GCP
• Passion for financial services
Qualifications
• Degree in Computer Science, Engineering, Data Science, or similar
What We …
such as Kubernetes
• Public GitHub repositories for our work
• Modern development practices such as domain-driven design, test-driven development, continuous integration, and continuous delivery
• Other tools such as Prometheus, Grafana, AppInsights, GitHub Actions, CircleCI, and more
• Legacy systems use Oracle Database, WebLogic & Forms, and Tungsten TotalAgility & SQL Server
Damia Group Limited acts as an employment agency for permanent recruitment …
Google Cloud Platform (GCP), AWS, and Azure
• Strong understanding of networking technologies such as LAN, WAN, firewalls, and related infrastructure
• Proficient with observability and monitoring tools, e.g. Grafana, SolarWinds, Prometheus, AWS CloudWatch, Splunk
• Familiarity with DevOps practices, including CI/CD pipelines, is beneficial
If you would be interested in having a further chat then please send your CV to …