of the following: Python, SQL, Java. Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams. Deep knowledge of database technologies: distributed systems (e.g., Spark, Hadoop, EMR); RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL); NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j). Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD, and …
pipelines and ETL - Informatica - Experience in SQL and database management systems - Knowledge of data modelling, warehousing concepts, and ETL processes - Experience with big data technologies and frameworks such as Hadoop, Hive, Spark - Programming experience in Python or Scala - Demonstrated analytical and problem-solving skills - Familiarity with cloud platforms (e.g. Azure, AWS) and their data-related services - Proactive and detail…
Liverpool, Merseyside, North West, United Kingdom Hybrid / WFH Options
Forward Role
both relational databases (SQL Server, MySQL) and NoSQL solutions (MongoDB, Cassandra). Hands-on knowledge of AWS S3 and associated big data services. Extensive experience with big data technologies including Hadoop and Spark for large-scale dataset processing. Deep understanding of data security frameworks, encryption protocols, access management and regulatory compliance. Proven track record building automated, scalable ETL frameworks and …
data architecture, integration, governance frameworks, and privacy-enhancing technologies. Experience with databases (SQL & NoSQL - Oracle, PostgreSQL, MongoDB), data warehousing, and ETL/ELT tools. Familiarity with big data technologies (Hadoop, Spark, Kafka), cloud platforms (AWS, Azure, GCP), and API integrations. Desirable: data certifications (TOGAF, DAMA), government/foundational data experience, cloud-native platforms knowledge, AI/ML data requirements…
Portsmouth, Hampshire, England, United Kingdom Hybrid / WFH Options
Computappoint
with Trino/Starburst Enterprise/Galaxy administration and CLI operations. Container Orchestration: proven track record with Kubernetes/OpenShift in production environments. Big Data Ecosystem: strong background in Hadoop, Hive, Spark, and cloud platforms (AWS/Azure/GCP). Systems Architecture: understanding of distributed systems, high availability, and fault-tolerant design. Security Protocols: experience with LDAP, Active Directory…
work experience). Proven experience with Trino/Starburst Enterprise/Galaxy administration/CLI. Implementation experience with container orchestration solutions (Kubernetes/OpenShift). Knowledge of Big Data (Hadoop/Hive/Spark) and Cloud technologies (AWS, Azure, GCP). Understanding of distributed system architecture, high availability, scalability, and fault tolerance. Familiarity with security authentication systems such as …
London, South East, England, United Kingdom Hybrid / WFH Options
Advanced Resource Managers Limited
work experience). Proven experience with Trino/Starburst Enterprise/Galaxy administration/CLI. Implementation experience with container orchestration solutions (Kubernetes/OpenShift). Knowledge of Big Data (Hadoop/Hive/Spark) and Cloud technologies (AWS, Azure, GCP). Understanding of distributed system architecture, high availability, scalability, and fault tolerance. Familiarity with security authentication systems such as …
leadership experience. Proficiency in modern data platforms (Databricks, Snowflake, Kafka), container orchestration (Kubernetes/OpenShift), and multi-cloud deployments across AWS, Azure, GCP. Advanced knowledge of Big Data ecosystems (Hadoop/Hive/Spark), data lakehouse architectures, mesh topologies, and real-time streaming platforms. Strong Unix/Linux skills, database connectivity (JDBC/ODBC), authentication systems (LDAP, Active Directory…
Telegraf and others. Knowledge and demonstrable hands-on experience with middleware technologies (Kafka, API gateways and others) and Data Engineering tools/frameworks like Apache Spark, Airflow, Flink and Hadoop ecosystems. Understanding of network technology fundamentals, Data Structures, scalable system design and ability to translate information in a structured manner for wider product and engineering teams to translate into …
as well as programming languages such as Python, R, or similar. Strong experience with machine learning frameworks (e.g., TensorFlow, Scikit-learn) as well as familiarity with data technologies (e.g., Hadoop, Spark). About Vixio: Our mission is to empower businesses to efficiently manage and meet their regulatory obligations with our unique combination of human expertise and Regulatory Technology (RegTech)…
right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up. Skills Desired: Apache Hadoop, Applied Mathematics, Big Data, Curiosity, Data Governance, Data Literacy, Data Management, Data Quality, Data Science, Data Strategy, Data Visualization, Deep Learning, Machine Learning (ML), Machine Learning Algorithms, Master Data…
and social benefits (e.g. UK pension scheme) What do you offer? Strong hands-on experience working with modern Big Data technologies such as Apache Spark, Trino, Apache Kafka, Apache Hadoop, Apache HBase, Apache NiFi, Apache Airflow, OpenSearch. Proficiency in cloud-native technologies such as containerization and Kubernetes. Strong knowledge of DevOps tools (Terraform, Ansible, ArgoCD, GitOps, etc.). Proficiency in …
Stevenage, Hertfordshire, South East, United Kingdom
Anson Mccade
Neo4j, InfluxDB). Hands-on experience with data processing and integration tools, including ETL, ESB, and APIs. Proficiency in Python or similar programming languages. Exposure to big data technologies (Hadoop or similar frameworks). Familiarity with Generative AI, NLP, or OCR applications is highly desirable. Previous experience in the industrial or defence sector is advantageous. Salary & Working Model…
Stevenage, Hertfordshire, South East, United Kingdom Hybrid / WFH Options
Henderson Scott
with SQL & NoSQL databases (e.g. MS SQL, MongoDB, Neo4j). Python skills for scripting and automation. ETL and data exchange experience (e.g. APIs, ESB tools). Knowledge of Big Data (e.g. Hadoop). Curiosity about AI, particularly NLP, OCR or Generative AI. Bonus Points For: Docker/containerisation experience. Any previous work in industrial, aerospace or secure environments. Exposure to tools like …
Agile working practices CI/CD tooling Scripting experience (Python, Perl, Bash, etc.) ELK (Elastic Stack) JavaScript Cypress Linux experience Search engine technology (e.g., Elasticsearch) Big Data Technology experience (Hadoop, Spark, Kafka, etc.) Microservice and cloud-native architecture. Desirable: Able to demonstrate experience of troubleshooting and diagnosis of technical issues. Able to demonstrate excellent team-working skills. Strong…
in applied research PREFERRED QUALIFICATIONS Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy etc. Experience with large scale distributed systems such as Hadoop, Spark etc. PhD. Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to our success. We make recruiting decisions based on your …
None. Preferred education: Bachelor's Degree. Required technical and professional expertise: Design, develop, and maintain Java-based applications for processing and analyzing large datasets, utilizing frameworks such as Apache Hadoop, Spark, and Kafka. Collaborate with cross-functional teams to define, design, and ship data-intensive features and services. Optimize existing data processing pipelines for efficiency, scalability, and reliability. Develop … s degree in Computer Science, Information Technology, or a related field, or equivalent experience. Experience in Big Data Java development. In-depth knowledge of Big Data frameworks, such as Hadoop, Spark, and Kafka, with a strong emphasis on Java development. Proficiency in data modeling, ETL processes, and data warehousing concepts. Experience with data processing languages like Scala, Python, or …
London, South East, England, United Kingdom Hybrid / WFH Options
Mexa Solutions LTD
data sources, including SQL and NoSQL databases. Implementing and optimizing data warehouse solutions and ETL/ELT pipelines for analytics and reporting. Working with big data ecosystems such as Hadoop, Spark, and Kafka to build scalable solutions. What you’ll bring... Strong expertise in SQL and NoSQL technologies, such as Oracle, PostgreSQL, MongoDB, or similar. Proven experience with data … warehousing concepts and ETL/ELT tools. Knowledge of big data platforms and streaming tools like Hadoop, Spark, and Kafka. A deep understanding of scalable data architectures, including high availability and fault tolerance. Experience working across hybrid or cloud environments. Excellent communication skills to engage both technical teams and senior stakeholders. What’s in it for you... This is …
About Agoda Agoda is an online travel booking platform for accommodations, flights, and more. We build and deploy cutting-edge technology that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, plus flights, activities, and more.
nature of the work, you must hold enhanced DV Clearance. WE NEED THE DATA ENGINEER TO HAVE.... Current enhanced DV Security Clearance. Experience with big data tools such as Hadoop, Cloudera or Elasticsearch. Experience with Palantir Foundry. Experience working in an Agile Scrum environment with tools such as Confluence/Jira. Experience in design, development, test and integration of …
Data Solutions in Mission-Critical areas. WE NEED THE BIG DATA ENGINEER TO HAVE.... Current DV clearance - Standard or Enhanced. Must have experience with big data tools such as Hadoop, Cloudera or Elasticsearch. Experience with Palantir Foundry is preferred but not essential. Experience working in an Agile Scrum environment. Experience in design, development, test and integration of software IT …
/MOD or Enhanced DV Clearance. WE NEED THE PYTHON/DATA ENGINEER TO HAVE.... Current DV Security Clearance (Standard or Enhanced). Experience with big data tools such as Hadoop, Cloudera or Elasticsearch. Python/PySpark experience. Experience with Palantir Foundry is nice to have. Experience working in an Agile Scrum environment with tools such as Confluence/Jira …