Bristol, England, United Kingdom Hybrid / WFH Options
Leonardo
government, defence, or highly regulated industries with knowledge of relevant standards Experience with additional data processing and ETL tools like Apache Kafka, Spark, or Hadoop Familiarity with containerization and orchestration tools such as Docker and Kubernetes Experience with monitoring and alerting tools such as Prometheus, Grafana, or ELK for …
HDInsight, and Azure Machine Learning Studio. Data Storage & Databases: SQL & NoSQL Databases: Experience with databases like PostgreSQL, MySQL, MongoDB, and Cassandra. Big Data Ecosystems: Hadoop, Spark, Hive, and HBase. Data Integration & ETL: Data Pipelining Tools: Apache NiFi, Apache Kafka, and Apache Flink. ETL Tools: AWS Glue, Azure Data Factory …
London, England, United Kingdom Hybrid / WFH Options
Endava
compliance (GDPR). Document data lineage and recommend improvements for data ownership and stewardship. Qualifications Programming: Python, SQL, Scala, Java. Big Data: Apache Spark, Hadoop, Databricks, Snowflake, etc. Cloud: AWS (Glue, Redshift), Azure (Synapse, Data Factory, Fabric), GCP (BigQuery, Dataflow). Data Modelling & Storage: Relational (PostgreSQL, SQL Server), NoSQL …
MongoDB, Cassandra). In-depth knowledge of data warehousing concepts and tools (e.g., Redshift, Snowflake, Google BigQuery). Experience with big data platforms (e.g., Hadoop, Spark, Kafka). Familiarity with cloud-based data platforms and services (e.g., AWS, Azure, Google Cloud). Expertise in ETL tools and processes (e.g. …
to present complex technical concepts to non-technical audiences clearly and effectively. Big Data Technologies: Familiarity with big data frameworks such as Apache Spark, Hadoop, or distributed computing concepts for processing large datasets. Cloud Computing and Infrastructure: Proficient in cloud platforms (e.g., AWS, Google Cloud, Azure) for data storage …
MS SQL Server or PostgreSQL Familiarity with platforms like Databricks and Snowflake for data engineering and analytics Experience working with Big Data technologies (e.g., Hadoop, Apache Spark) Familiarity with NoSQL databases (e.g., columnar or graph databases like Cassandra, Neo4j) Research experience with peer-reviewed publications Certifications in cloud-based …
Experience with ETL processes and tools. Knowledge of cloud platforms (e.g., GCP, AWS, Azure) and their data services. Familiarity with big data technologies (e.g., Hadoop, Spark) is a plus. Understanding of AI tools like Gemini and ChatGPT is also a plus. Excellent problem-solving and communication skills. Ability to …
of data warehouse operations, schema evolution, indexing, partitioning. Experience with Terraform or CloudFormation. Understanding of ML workflows and hardware considerations. Experience with Spark, Flink, Hadoop, Beam. Familiarity with Databricks and Lakehouse architecture. Knowledge of data quality and lineage frameworks. Understanding of data security, privacy, GDPR. Experience with real-time …
Relational Databases and Data Warehousing concepts. Experience of enterprise ETL tools such as Informatica, Talend, DataStage or Alteryx. Project experience using the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross and multi-platform experience. Team building and leadership skills. You must be: Willing to work on client sites …
London, England, United Kingdom Hybrid / WFH Options
Rein-Ton
validate solutions for quality assurance. Qualifications: Proven experience as a Data Engineer, especially with data pipelines. Proficiency in Python, Java, or Scala; experience with Hadoop, Spark, Kafka. Experience with Databricks, Azure AI Services, and cloud platforms (AWS, Google Cloud, Azure). Strong SQL and NoSQL database skills. Problem-solving …
City Of Bristol, England, United Kingdom Hybrid / WFH Options
ADLIB Recruitment | B Corp™
complex data concepts to cross-functional teams Bonus points for experience with: DevOps tools like Docker, Kubernetes, CI/CD Big data tools (Spark, Hadoop), ETL workflows, or high-throughput data streams Genomic data formats and tools Cold and hot storage management, ZFS/RAID systems, or tape storage …
London, England, United Kingdom Hybrid / WFH Options
Rein-Ton
Proven experience as a Data Engineer with a strong background in data pipelines. Proficiency in Python, Java, or Scala, and big data technologies (e.g., Hadoop, Spark, Kafka). Experience with Databricks, Azure AI Services, and cloud platforms (AWS, Google Cloud, Azure). Solid understanding of SQL and NoSQL databases. …
with SQL and database management systems (e.g., PostgreSQL, MySQL). Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data tools (e.g., Spark, Hadoop) is a plus. Prior experience in financial data analysis is highly preferred. Understanding of financial datasets, metrics, and industry trends. Preferred Qualifications: Experience with API …
platforms (AWS, GCP, Azure) and cloud services for data storage, processing, and analytics. Nice-to-Have Requirements: Familiarity with big data technologies such as Hadoop, Kafka, and similar tools. Familiarity with version control systems like Git. Experience with GraphDB, MongoDB, SQL/NoSQL, MS Access, and other databases. Ability …
and other programming skills (Spark/Scala desirable). Experience both using and building APIs. Strong SQL background. Exposure to big data technologies (Spark, Hadoop, Presto, etc.). Works well both collaboratively and independently, with a proven ability to form and manage strong relationships within the organisation and with clients. Ability …
engineering, data analytics, or data science, with the ability to work effectively with various data types and sources. Experience using big data technologies (e.g. Hadoop, Spark, Hive) and database management systems (e.g. SQL and NoSQL). Graph Database Expertise: Deep understanding of graph database concepts, data modeling, and query …
sponsor visas. Preferred Skills and Experience Public sector experience Knowledge of cloud platforms (IBM Cloud, AWS, Azure) Experience with big data frameworks (Apache Spark, Hadoop) Data warehousing and BI tools (IBM Cognos, Tableau) Additional Details Seniority level: Mid-Senior level Employment type: Full-time Job function: Information Technology Industries …
experience in data engineering, including working with AWS services. Proficiency in AWS services like S3, Glue, Redshift, Lambda, and EMR. Knowledge of Cloudera-based Hadoop is a plus. Strong ETL development skills and experience with data integration tools. Knowledge of data modeling, data warehousing, and data transformation techniques. Familiarity …
NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based data services (e.g., AWS …