analytical, problem-solving, and critical thinking skills. 8. Experience with social media analytics and understanding of user behaviour. 9. Familiarity with big data technologies, such as Apache Hadoop, Apache Spark, or Apache Kafka. 10. Knowledge of AWS machine learning services, such as Amazon SageMaker and Amazon Comprehend. 11. Experience with …
modelling, design, and integration expertise. Data Mesh Architectures: In-depth understanding of data mesh architectures. Technical Proficiency: Proficient in dbt, SQL, Python/Java, Apache Spark, Trino, Apache Airflow, and Astro. Cloud Technologies: Awareness and experience with cloud technologies, particularly AWS. Analytical Skills: Excellent problem-solving and …
years, and ability to obtain security clearance. Preferred Skills: Experience with cloud platforms (IBM Cloud, AWS, Azure). Knowledge of big data frameworks (Apache Spark, Hadoop). Experience with data warehousing tools like IBM Cognos or Tableau. Certifications in relevant technologies are a plus. Additional Details: Seniority level …
government security clearance. Preferred Technical And Professional Experience: Experience with machine learning frameworks (TensorFlow, PyTorch, scikit-learn). Familiarity with big data technologies (Hadoop, Spark). Background in data science, IT consulting, or a related field. AWS Certified Big Data or equivalent. Seniority level: Mid-Senior level …
Bristol, England, United Kingdom Hybrid / WFH Options
ADLIB Recruitment | B Corp™
translate complex data concepts to cross-functional teams. Bonus points for experience with: DevOps tools like Docker, Kubernetes, CI/CD; big data tools (Spark, Hadoop), ETL workflows, or high-throughput data streams; genomic data formats and tools; cold and hot storage management, ZFS/RAID systems, or tape …
Azure). Experience with: data warehousing and lake architectures; ETL/ELT pipeline development; SQL and NoSQL databases; distributed computing frameworks (Spark, Kinesis, etc.); software development best practices including CI/CD, TDD and version control; containerisation tools like Docker or Kubernetes; experience …
Jenkins, Bamboo, Concourse etc. Monitoring utilising products such as: Prometheus, Grafana, ELK, Filebeat etc. Observability - SRE. Big Data solutions (ecosystems) and technologies such as: Apache Spark and the Hadoop Ecosystem. Edge technologies e.g. NGINX, HAProxy etc. Excellent knowledge of YAML or similar languages. The following Technical Skills & Experience …
data architecture, including data modeling, warehousing, real-time and batch processing, and big data frameworks. Proficiency with modern data tools and technologies such as Spark, Databricks, Kafka, or Snowflake (bonus). Knowledge of cloud security, networking, and cost optimization as it relates to data platforms. Experience in total cost …
read and write clean, efficient code beyond Jupyter notebooks, particularly in production-grade codebases. Strong skills in SQL databases, working with tools such as Spark in cloud environments. Experience working within agile teams, with a solid understanding of code reviews and source control best practices using GitHub. Excellent communication …
Bristol, England, United Kingdom Hybrid / WFH Options
Leonardo
the UK’s digital landscape. This role requires strong expertise in building and managing data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. The successful candidate will design, implement, and maintain scalable, secure data solutions, ensuring compliance with strict security standards and regulations. This is a … of compressed hours. The role will include: Design, develop, and maintain secure and scalable data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. Implement data ingestion, transformation, and integration processes, ensuring data quality and security. Collaborate with data architects and security teams to ensure compliance with … Engineer in secure or regulated environments. Expertise in the Elastic Stack (Elasticsearch, Logstash, Kibana) for data ingestion, transformation, indexing, and visualization. Strong experience with Apache NiFi for building and managing complex data flows and integration processes. Knowledge of security practices for handling sensitive data, including encryption, anonymization, and access …
elevate technology and consistently apply best practices. Qualifications for Software Engineer: Hands-on experience working with technologies like Hadoop, Hive, Pig, Oozie, MapReduce, Spark, Sqoop, Kafka, Flume, etc. Strong DevOps focus and experience building and deploying infrastructure with cloud deployment technologies like Ansible, Chef, Puppet, etc. Experience with …
on experience in tools like Snowflake, dbt, SQL Server, and programming languages such as Python, Java, or Scala. Proficient in big data tools (e.g., Spark, Kafka), cloud platforms (AWS, Azure, GCP), and embedding AI/GenAI into scalable data infrastructures. Strong stakeholder engagement and the ability to translate technical …
Bristol, England, United Kingdom Hybrid / WFH Options
Ripjar
libraries such as PyTorch, scikit-learn, NumPy and SciPy. Good communication and interpersonal skills. Experience working with large-scale data processing systems such as Spark and Hadoop. Experience in software development in agile environments and an understanding of the software development lifecycle. Experience using or implementing ML Operations approaches …
Analytics, SQL DW, and Cosmos DB. The data engineer is proficient in Azure Data Platform components, including ADLS2, Blob Storage, SQLDW, Synapse Analytics with Spark and SQL, Azure Functions with Python, Azure Purview, and Cosmos DB. They are also proficient in Azure Event Hub and Streaming Analytics, Managed Streaming … for Apache Kafka, Azure Databricks with Spark, and other open source technologies like Apache Airflow and dbt, Spark/Python, or Spark/Scala. Preferred Education: Bachelor's Degree. Required Technical And Professional Expertise: As an equal opportunities employer, we welcome applications from individuals of …
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Curo Resourcing Ltd
CI/CD paradigms and systems such as: Ansible, Terraform, Jenkins, Bamboo, Concourse etc. Observability - SRE. Big Data solutions (ecosystems) and technologies such as: Apache Spark and the Hadoop Ecosystem. Excellent knowledge of YAML or similar languages. The following Technical Skills & Experience would be desirable: JupyterHub awareness …
Bristol, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
exchange connectivity. Scripting abilities in Python, Bash, or similar languages. Knowledge of monitoring tools and alerting frameworks. Exposure to data technologies such as Kafka, Spark or Delta Lake is useful but not mandatory. Bachelor's degree in Computer Science, Engineering, or related technical field. This role offers competitive compensation …
Bristol, England, United Kingdom Hybrid / WFH Options
VC Evidensia UK
leadership to teams working with neighbouring technologies, most notably Microsoft’s Power Platform and Azure infrastructure. Experience across the Microsoft Azure cloud platform: Databricks/Spark. The administration of database systems, primarily Microsoft SQL Server. Data Warehousing/Data Lakes/Master Data Management/Artificial Intelligence (AI)/Data …
Bristol, England, United Kingdom Hybrid / WFH Options
Lloyds Bank plc
relational databases to build data solutions, such as SQL Server/Oracle, experience with relational and dimensional data structures. Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop). Proficiency in infrastructure as code (IaC) using Terraform. Experience with CI/CD pipelines and related tools/frameworks. … understanding of cloud storage, networking, and resource provisioning. It would be great if you had... Certification in GCP “Professional Data Engineer”. Certification in Apache Kafka (CCDAK). Proficiency across the data lifecycle. Working for us: Our focus is to ensure we are inclusive every day, building an organisation …
Greater Bristol Area, United Kingdom Hybrid / WFH Options
LHH
secure or regulated environments. Ingest, process, index, and visualise data using the Elastic Stack (Elasticsearch, Logstash, Kibana). Build and maintain robust data flows with Apache NiFi. Implement best practices for handling sensitive data, including encryption, anonymisation, and access control. Monitor and troubleshoot real-time data pipelines to ensure high … experience as a Data Engineer in secure, regulated, or mission-critical environments. Proven expertise with the Elastic Stack (Elasticsearch, Logstash, Kibana). Solid experience with Apache NiFi. Strong understanding of data security, governance, and compliance requirements. Experience building real-time, large-scale data pipelines. Working knowledge of cloud platforms (AWS … with a strong focus on data accuracy, quality, and reliability. Desirable (Nice to Have): Background in defence, government, or highly regulated sectors. Familiarity with Apache Kafka, Spark, or Hadoop. Experience with Docker and Kubernetes. Use of monitoring/alerting tools such as Prometheus, Grafana, or ELK. Understanding of …
relational databases to build data solutions, such as SQL Server/Oracle, experience with relational and dimensional data structures. Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop). Proficiency in infrastructure as code (IaC) using Terraform. Experience with CI/CD pipelines and related tools/frameworks. … understanding of cloud storage, networking and resource provisioning. It would be great if you had: Certification in GCP "Professional Data Engineer". Certification in Apache Kafka (CCDAK). Proficiency across the data lifecycle. WORKING FOR US: Our focus is to ensure we are inclusive every day, building an organisation …
relational databases to build data solutions, such as SQL Server/Oracle, experience with relational and dimensional data structures. Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop). Proficiency in infrastructure as code (IaC) using Terraform. Experience with CI/CD pipelines and related tools/frameworks. Containerisation. Good … Good understanding of cloud storage, networking and resource provisioning. It would be great if you had... Certification in GCP "Professional Data Engineer". Certification in Apache Kafka (CCDAK). Proficiency across the data lifecycle. WORKING FOR US: Our focus is to ensure we are inclusive every day, building an organisation that …
learning models, and create robust solutions that enhance public service delivery. Working in classified environments, you'll tackle complex challenges using tools like Hadoop, Spark, and modern visualisation frameworks while implementing automation that drives government efficiency. You'll collaborate with stakeholders to transform legacy systems, implement data governance frameworks … analytics platforms e.g. relevant AWS and Azure platform services; Data tools: hands-on experience with Palantir (ESSENTIAL); Data science approaches and tooling e.g. Hadoop, Spark; Data engineering approaches; Database management, e.g. MySQL, Postgres; Software development methods and techniques e.g. Agile methods such as SCRUM; Software change management, notably familiarity …
Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools: Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with relational SQL and NoSQL databases, including Postgres and Cassandra. Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow … etc. Experience with AWS cloud services: EC2, EMR, RDS, Redshift. Experience with stream-processing systems: Storm, Spark Streaming, etc. Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc. Salary: 30,000 per annum + benefits