implementation of modern data architectures (ideally Azure, AWS, Microsoft Fabric, GCP, Data Factory) and modern data warehouse technologies (Snowflake, Databricks) Experience with database technologies such as SQL, NoSQL, Oracle, Hadoop, or Teradata Ability to collaborate within and across teams of different technical knowledge to support delivery and educate end users on data products Expert problem-solving skills, including debugging More ❯
data architecture, integration, governance frameworks, and privacy-enhancing technologies Experience with databases (SQL & NoSQL - Oracle, PostgreSQL, MongoDB), data warehousing, and ETL/ELT tools Familiarity with big data technologies (Hadoop, Spark, Kafka), cloud platforms (AWS, Azure, GCP), and API integrations Desirable: Data certifications (TOGAF, DAMA), government/foundational data experience, cloud-native platforms knowledge, AI/ML data requirements More ❯
Portsmouth, Hampshire, England, United Kingdom Hybrid / WFH Options
Computappoint
with Trino/Starburst Enterprise/Galaxy administration and CLI operations Container Orchestration : Proven track record with Kubernetes/OpenShift in production environments Big Data Ecosystem : Strong background in Hadoop, Hive, Spark, and cloud platforms (AWS/Azure/GCP) Systems Architecture : Understanding of distributed systems, high availability, and fault-tolerant design Security Protocols : Experience with LDAP, Active Directory More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Advanced Resource Managers Limited
work experience). Proven experience with Trino/Starburst Enterprise/Galaxy administration/CLI. Implementation experience with container orchestration solutions (Kubernetes/OpenShift). Knowledge of Big Data (Hadoop/Hive/Spark) and Cloud technologies (AWS, Azure, GCP). Understanding of distributed system architecture, high availability, scalability, and fault tolerance. Familiarity with security authentication systems such as More ❯
insights Develop and implement data models, databases, and analytics solutions Build and maintain dashboards and reports using Power BI/Tableau Work with SQL, Python, and distributed computing tools (Hadoop, Spark, etc.) Support data governance and optimisation initiatives Collaborate with leadership teams to prioritise and deliver data-driven solutions What we’re looking for: 5+ years’ experience as a More ❯
leadership experience Proficiency in modern data platforms (Databricks, Snowflake, Kafka), container orchestration (Kubernetes/OpenShift), and multi-cloud deployments across AWS, Azure, GCP Advanced knowledge of Big Data ecosystems (Hadoop/Hive/Spark), data lakehouse architectures, mesh topologies, and real-time streaming platforms Strong Unix/Linux skills, database connectivity (JDBC/ODBC), authentication systems (LDAP, Active Directory More ❯
Familiarity with and experience of using UNIX Knowledge of CI toolsets Good client facing skills and problem solving aptitude DevOps knowledge of SQL, Oracle DB, Postgres, ActiveMQ, Zabbix, Ambari, Hadoop, Jira, Confluence, BitBucket, ActiviBPM, Oracle SOA, Azure, SQLServer, IIS, AWS, Grafana, Oracle BPM, Jenkins, Puppet, CI and other cloud technologies. Your Skills: Excellent attention to detail and ability to More ❯
Stevenage, Hertfordshire, South East, United Kingdom
Anson Mccade
Neo4J, InfluxDB). Hands-on experience with data processing and integration tools , including ETL, ESB, and APIs. Proficiency in Python or similar programming languages. Exposure to big data technologies (Hadoop or similar frameworks). Familiarity with Generative AI, NLP, or OCR applications is highly desirable. Previous experience in the industrial or defence sector is advantageous. Salary & Working Model More ❯
Stevenage, Hertfordshire, South East, United Kingdom Hybrid / WFH Options
Henderson Scott
with SQL & NoSQL databases (e.g. MS SQL, MongoDB, Neo4J) Python skills for scripting and automation ETL and data exchange experience (e.g. APIs, ESB tools) Knowledge of Big Data (e.g. Hadoop) Curiosity about AI, particularly NLP, OCR or Generative AI Bonus Points For Docker/containerisation experience Any previous work in industrial, aerospace or secure environments Exposure to tools like More ❯
Star Schema, Data Vault). Proficiency with cloud-native and DataOps solutions (Azure/AWS stack, event streaming with Azure Event Hubs, Kafka). Experience in Big Data solutions (Hadoop, Cassandra). Understanding of compliance frameworks (e.g., GDPR, ISO 27701) and industry methodologies (e.g., TOGAF, DAMA). Skilled in architecture and design tools (Visio, Draw.io, Archi, SparxEA). General More ❯
london (city of london), south east england, united kingdom
Experis
Star Schema, Data Vault). Proficiency with cloud-native and DataOps solutions (Azure/AWS stack, event streaming with Azure Event Hubs, Kafka). Experience in Big Data solutions (Hadoop, Cassandra). Understanding of compliance frameworks (e.g., GDPR, ISO 27701) and industry methodologies (e.g., TOGAF, DAMA). Skilled in architecture and design tools (Visio, Draw.io, Archi, SparxEA). General More ❯
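The listings above repeatedly ask for dimensional modelling (Star Schema). As a minimal sketch of the idea — using sqlite3, with entirely hypothetical table and column names not taken from any listing — a star schema pairs a fact table of measures with dimension tables of descriptive attributes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: descriptive attributes you slice and filter by.
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT)")
# Fact table: numeric measures plus foreign keys into the dimensions.
cur.execute("""CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY,
               product_id INTEGER REFERENCES dim_product(product_id),
               qty INTEGER, amount REAL)""")

cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(1, 1, 2, 20.0), (2, 1, 1, 10.0), (3, 2, 5, 75.0)])

# The typical star-schema query shape: aggregate facts, group by a dimension attribute.
rows = cur.execute(
    """SELECT p.name, SUM(f.qty) AS units, SUM(f.amount) AS revenue
       FROM fact_sales f JOIN dim_product p ON f.product_id = p.product_id
       GROUP BY p.name ORDER BY p.name"""
).fetchall()
print(rows)  # [('Gadget', 5, 75.0), ('Widget', 3, 30.0)]
```

The same shape scales up in Snowflake or a lakehouse; only the dialect and storage change.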
not help here. Interview includes coding test. Job Description: Scala/Spark • Good Big Data resource with the below skillset: Spark, Scala, Hive/HDFS/HQL • Linux Based Hadoop Ecosystem (HDFS, Impala, Hive, HBase, etc.) • Experience in Big data technologies, real time data processing platform (Spark Streaming) experience would be an advantage. • Consistently demonstrates clear and concise written More ❯
london (city of london), south east england, united kingdom
Ubique Systems
not help here. Interview includes coding test. Job Description: Scala/Spark • Good Big Data resource with the below skillset: Spark, Scala, Hive/HDFS/HQL • Linux Based Hadoop Ecosystem (HDFS, Impala, Hive, HBase, etc.) • Experience in Big data technologies, real time data processing platform (Spark Streaming) experience would be an advantage. • Consistently demonstrates clear and concise written More ❯
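The Scala/Spark listings above call out real-time processing with Spark Streaming, whose classic DStream model groups an unbounded stream into fixed-size micro-batches and keeps running state across them. A rough plain-Python sketch of that micro-batch idea (no Spark dependency; all function and variable names here are invented for illustration):

```python
from collections import Counter
from itertools import islice

def micro_batches(stream, batch_size):
    """Chop an unbounded iterator into fixed-size micro-batches,
    the discretisation Spark Streaming applies to a live stream."""
    it = iter(stream)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

def running_word_count(lines, batch_size=2):
    """Stateful aggregation carried across batches, loosely akin to
    Spark Streaming's updateStateByKey."""
    state = Counter()
    for batch in micro_batches(lines, batch_size):
        for line in batch:
            state.update(line.split())
        yield dict(state)  # emit a state snapshot after each micro-batch

events = ["spark streaming", "spark scala", "hive spark"]
snapshots = list(running_word_count(events))
print(snapshots[-1])  # {'spark': 3, 'streaming': 1, 'scala': 1, 'hive': 1}
```

In real Spark the batches arrive on a clock interval and the state lives in the cluster, but the per-batch update loop is the same shape.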
Milton Keynes, Buckinghamshire, England, United Kingdom Hybrid / WFH Options
Lorien
banking-focused data model. Liaise with IT teams to transition data models into production environments. Conduct data mining and exploratory data analysis to support model development. Apply strong SQL, Hadoop, and cloud-based data processing skills to manage and analyse large datasets. Support the design and structure of data models, with a working understanding of data modelling principles. Present … scalable data solutions within a cloud architecture. Key Skills & Experience: Proven experience as a technical data analyst or data engineer in a project-focused environment. Strong proficiency in SQL, Hadoop, and cloud platforms (preferably AWS). Experience with data mining, data modelling, and large-scale data processing. Familiarity with tools such as Python, R, and Power BI. Understanding of More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Mexa Solutions LTD
data sources, including SQL and NoSQL databases. Implementing and optimizing data warehouse solutions and ETL/ELT pipelines for analytics and reporting. Working with big data ecosystems such as Hadoop, Spark, and Kafka to build scalable solutions. What you’ll bring... Strong expertise in SQL and NoSQL technologies, such as Oracle, PostgreSQL, MongoDB, or similar. Proven experience with data … warehousing concepts and ETL/ELT tools. Knowledge of big data platforms and streaming tools like Hadoop, Spark, and Kafka. A deep understanding of scalable data architectures, including high availability and fault tolerance. Experience working across hybrid or cloud environments. Excellent communication skills to engage both technical teams and senior stakeholders. What’s in it for you... This is More ❯
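The ETL/ELT pipeline work these roles describe can be sketched in miniature. This uses sqlite3 as a stand-in for both the source system and the warehouse; every table and column name is hypothetical:

```python
import sqlite3

# Source system with raw, messy records.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE raw_orders (id INTEGER, customer TEXT, amount TEXT)")
src.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                [(1, " alice ", "10.50"), (2, "BOB", "3.25"), (3, None, "1.00")])

# Extract: pull the raw rows.
rows = src.execute("SELECT id, customer, amount FROM raw_orders").fetchall()

# Transform: normalise names, cast string amounts to numbers,
# drop records with no customer.
clean = [(i, c.strip().title(), float(a)) for i, c, a in rows if c]

# Load: write the cleaned rows into the warehouse table.
wh = sqlite3.connect(":memory:")
wh.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
wh.executemany("INSERT INTO orders VALUES (?, ?, ?)", clean)

loaded = wh.execute("SELECT customer, amount FROM orders ORDER BY id").fetchall()
print(loaded)  # [('Alice', 10.5), ('Bob', 3.25)]
```

An ELT variant would load the raw rows first and run the transform as SQL inside the warehouse; tools like Data Factory or Spark orchestrate the same three steps at scale.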