messaging frameworks and/or distributed tracing/monitoring, this will put you in a good position. The tech stack includes AWS, GCP, Azure, Kafka, Spark, Zipkin, OpenTracing, Prometheus, Grafana, the ELK stack, Micrometer metrics, Docker, Kubernetes, and Helm, as well as automating deployment, releases, and testing in continuous integration/continuous delivery pipelines.
ReactJS, JavaScript, CSS, etc. Desired Skills: Cloud technologies (AWS, GCP, Azure, etc.); security frameworks/standards; understanding of data streaming and messaging frameworks (Kafka, Spark, etc.); understanding of containers (Docker, Kubernetes, Helm, etc.); experience automating deployment, releases, and testing in continuous integration/continuous delivery pipelines. Please apply.
and unstructured data, and curate data to provide real-time contextualized insights. Manage the full data lifecycle, with experience using Microsoft Azure, SQL Server, the Hadoop ecosystem, Spark, and Kafka, and building capabilities to host a wider set of technologies. When the team expands, mentor new data team members. Adopt best data … Experience you will have: Experience working successfully within a start-up or scale-up business. Previous work with big data tech such as Hadoop, Spark, and Kafka. Good working knowledge of containers, including Docker and Kubernetes, and experience working on the Microsoft Azure platform. Coding/scripting … CD exposure. Experience working with a range of data sources, including APIs, flat files, and databases. Desirable: Microsoft Certified Azure Data Engineer; Apache Kafka (CCDAK) certified. What you will get in return: 36 days' holiday (more than the UK average), 10% bonus, % of shares in the business.
Python Developer. Location: Glasgow. Duration: 6 months. Job description: As a Python/Spark Big Data Software Engineer, you will serve as a seasoned member of an agile team to design and deliver trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for …/CD, Application Resiliency, and Security. Preferred qualifications, capabilities, and skills: · Skilled with Python or PySpark · Exposure to cloud technologies (Airflow, Astronomer, Kubernetes, AWS, Spark, Kafka) · Experience with Big Data solutions or relational databases · Experience in the Financial Services industry is a bonus.
scripting.
* Minimum of 1 year's experience in investment banking or the financial sector.
* Performance tuning of Oracle/MySQL/Hive SQL queries and Spark SQL statements.
* Experience working with large, multi-terabyte databases (3+ terabytes).
* Minimum of 5 years' experience in the Big Data space (Hive, Impala … Spark SQL, HDFS, etc.).
* Any cloud experience (AWS/Azure/Google/Oracle).
* Solid experience with Oracle objects (Packages, Procedures, Functions).
* Very clear concepts of Oracle architecture.
* Very strong debugging skills.
* Proficient in query tuning.
* Detail-oriented.
* Strong written and verbal communication skills.
* Ability to work