Bristol, Gloucestershire, United Kingdom - Hybrid / WFH Options
Made Tech Limited
… Infrastructure as Code (IaC) and deploying infrastructure across environments
- Managing cloud infrastructure with a DevOps approach
- Handling and transforming various data types (JSON, CSV, etc.) using Apache Spark, Databricks, or Hadoop (see the sketch below)
- Understanding modern data system architectures (Data Warehouses, Data Lakes, Data Meshes) and their use cases
- Creating data pipelines on cloud platforms with error handling and reusable libraries
- Documenting and …
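As a concrete illustration of the Spark items above, here is a minimal PySpark sketch of transforming JSON and CSV inputs inside a pipeline with basic error handling. The bucket paths, column names, and aggregation are hypothetical, not taken from the listing.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-revenue-pipeline").getOrCreate()

try:
    # Read semi-structured inputs; paths and schemas are assumptions
    orders = spark.read.option("multiLine", True).json("s3://raw-bucket/orders/")
    customers = (spark.read
                 .option("header", True)
                 .option("inferSchema", True)
                 .csv("s3://raw-bucket/customers.csv"))

    # Transform: join, derive a date column, aggregate per day and region
    enriched = (orders.join(customers, "customer_id", "left")
                .withColumn("order_date", F.to_date("order_ts"))
                .groupBy("order_date", "region")
                .agg(F.sum("amount").alias("daily_revenue")))

    # Write partitioned Parquet for downstream consumers
    (enriched.write.mode("overwrite")
     .partitionBy("order_date")
     .parquet("s3://curated-bucket/daily_revenue/"))
except Exception as exc:
    # Minimal error handling: surface the failure so the scheduler can retry/alert
    print(f"Pipeline failed: {exc}")
    raise
finally:
    spark.stop()
```

Wrapping the read-transform-write steps in one try/finally keeps the failure path explicit and guarantees the session is released, which is the reusable shape the listing's "error handling and reusable libraries" point gestures at.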
… Terraform, Jenkins, Bamboo, Concourse, etc.
- Monitoring utilising products such as: Prometheus, Grafana, ELK, Filebeat, etc. (see the sketch below)
- Observability - SRE
- Big Data solutions (ecosystems) and technologies such as: Apache Spark and the Hadoop ecosystem
- Edge technologies, e.g. NGINX, HAProxy, etc.
- Excellent knowledge of YAML or similar languages

The following Technical Skills & Experience would be desirable for Data DevOps Engineer:
- Jupyter Hub Awareness …
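To make the Prometheus/Grafana item above concrete, here is a minimal sketch of instrumenting a Python service with the prometheus_client library so a Prometheus server can scrape it. The metric names and the simulated batch workload are invented for the example.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical pipeline metrics, scraped by Prometheus and graphed in Grafana
RECORDS_PROCESSED = Counter("records_processed_total",
                            "Records handled by the pipeline")
BATCH_SECONDS = Histogram("batch_duration_seconds",
                          "Time spent processing one batch")

def process_batch() -> None:
    with BATCH_SECONDS.time():            # observe batch latency
        time.sleep(random.uniform(0.1, 0.5))  # stand-in for real work
        RECORDS_PROCESSED.inc(100)        # count processed records

if __name__ == "__main__":
    start_http_server(8000)               # expose /metrics on port 8000
    while True:
        process_batch()
```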
… Tableau)
- Machine Learning Fundamentals (e.g., Supervised, Unsupervised Learning)
- Machine Learning Algorithms (e.g., Regression, Classification, Clustering, Decision Trees, SVMs, Neural Networks)
- Model Evaluation and Validation (see the sketch below)
- Big Data Technologies (e.g., Spark, Hadoop - conceptual understanding)
- Database Querying (e.g., SQL)
- Cloud-based Data Platforms (e.g., AWS SageMaker, Google AI Platform, Azure ML)
- Ethics in Data Science and AI

Person Specification: Experience supporting data …
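For the classification and model-evaluation items above, here is a minimal scikit-learn sketch covering training, a hold-out evaluation, and cross-validation. The synthetic dataset and hyperparameters are placeholders for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic data stands in for a real feature table
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Hold-out evaluation, plus cross-validation for a more stable estimate
print(classification_report(y_test, model.predict(X_test)))
print("5-fold CV accuracy:",
      cross_val_score(model, X_train, y_train, cv=5).mean())
```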
Bristol, Gloucestershire, United Kingdom - Hybrid / WFH Options
Curo Resourcing Ltd
… and CI/CD paradigms and systems such as: Ansible, Terraform, Jenkins, Bamboo, Concourse, etc.
- Observability - SRE
- Big Data solutions (ecosystems) and technologies such as: Apache Spark and the Hadoop ecosystem
- Excellent knowledge of YAML or similar languages

The following Technical Skills & Experience would be desirable:
- Jupyter Hub Awareness
- RabbitMQ or other common queue technology, e.g. ActiveMQ (see the sketch below)
- NiFi
- Rego …
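For the queueing item above, here is a minimal sketch of publishing a persistent message to RabbitMQ with the pika client. The broker host, queue name, and JSON payload are hypothetical.

```python
import pika

# Connect to a hypothetical local broker
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Durable queue survives broker restarts
channel.queue_declare(queue="ingest.events", durable=True)

# Publish one persistent message (delivery_mode=2 writes it to disk)
channel.basic_publish(
    exchange="",
    routing_key="ingest.events",
    body=b'{"event": "file_landed", "path": "/data/in/batch_001.csv"}',
    properties=pika.BasicProperties(delivery_mode=2),
)
connection.close()
```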
… None
Preferred education: Bachelor's Degree

Required technical and professional expertise:
- Design, develop, and maintain Java-based applications for processing and analyzing large datasets, utilizing frameworks such as Apache Hadoop, Spark, and Kafka (see the sketch below).
- Collaborate with cross-functional teams to define, design, and ship data-intensive features and services.
- Optimize existing data processing pipelines for efficiency, scalability, and reliability.
- Develop …
- …'s degree in Computer Science, Information Technology, or a related field, or equivalent experience.
- Experience in Big Data Java development.
- In-depth knowledge of Big Data frameworks, such as Hadoop, Spark, and Kafka, with a strong emphasis on Java development.
- Proficiency in data modeling, ETL processes, and data warehousing concepts.
- Experience with data processing languages like Scala, Python, or …
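The listing centres on Java, but as a language-neutral illustration of the Spark-plus-Kafka combination it describes, here is a minimal PySpark Structured Streaming sketch that consumes a Kafka topic and writes Parquet. The broker address, topic name, JSON field, and output paths are all assumptions, and the job needs the spark-sql-kafka connector on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Subscribe to a hypothetical 'orders' topic
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders")
          .load())

# Kafka delivers raw bytes; cast to string and pull one JSON field
parsed = (stream.selectExpr("CAST(value AS STRING) AS json")
          .select(F.get_json_object("json", "$.amount")
                  .cast("double").alias("amount")))

# Append results as Parquet; the checkpoint makes the stream restartable
query = (parsed.writeStream.outputMode("append")
         .format("parquet")
         .option("path", "/data/out/orders")
         .option("checkpointLocation", "/data/chk/orders")
         .start())
query.awaitTermination()
```

The same pipeline shape (source, parse, sink with checkpointing) carries over directly to the Java DataFrame API the posting asks for.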