skills. Experience working in BFSI or enterprise-scale environments is a plus. Preferred: Exposure to cloud platforms (AWS, Azure, GCP) and their data services. Knowledge of Big Data platforms (Hadoop, Spark, Snowflake, Databricks). Familiarity with data governance and data catalog tools.
vision. Hands-on with data engineering, model deployment (MLOps), and cloud platforms (AWS, Azure, GCP). Strong problem-solving, algorithmic, and analytical skills. Knowledge of big data tools (Spark, Hadoop) is a plus.
with Spark. Experience building, maintaining, and debugging DBT pipelines. Strong proficiency in developing, monitoring, and debugging ETL jobs. Deep understanding of SQL and experience with Databricks, Snowflake, BigQuery, Azure, Hadoop, or CDP environments. Hands-on technical support experience, including escalation management and adherence to SLAs. Familiarity with CI/CD technologies and version control systems like Git. Expertise in …
Azure Data Factory, Azure Functions, and Synapse Analytics. Proficient in Python and advanced SQL, including query tuning and optimisation. Hands-on experience with big data tools such as Spark, Hadoop, and Kafka. Familiarity with CI/CD pipelines, version control, and deployment automation. Experience using Infrastructure as Code tools like Terraform. Solid understanding of Azure-based networking and cloud …
Birmingham, West Midlands, West Midlands (County), United Kingdom Hybrid / WFH Options
Experis
Using machine learning tools to select features, create and optimize classifiers. Qualifications: Programming Skills - knowledge of statistical programming languages like Python, and database query languages like SQL, Hive/Hadoop, Pig is desirable. Familiarity with Scala and Java is an added advantage. Statistics - Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators, etc. Proficiency …
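As a rough illustration of the "select features, create and optimize classifiers" responsibility above, here is a minimal scikit-learn sketch. The dataset, the choice of SelectKBest with logistic regression, and the parameter grid are placeholder assumptions for illustration, not details from the advert.

```python
# Illustrative sketch only: feature selection and classifier tuning in one
# scikit-learn pipeline, so the grid search tunes both steps together.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder dataset standing in for whatever the real project would use.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(score_func=f_classif)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Hypothetical search space: number of features kept and regularisation strength.
param_grid = {
    "select__k": [5, 10, 20],
    "clf__C": [0.1, 1.0, 10.0],
}

search = GridSearchCV(pipeline, param_grid, cv=5, scoring="roc_auc")
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("held-out score:", search.score(X_test, y_test))
```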
in software development with at least 2 server-side languages - Java being a must-have. Proven experience with microservices architecture and scalable, distributed systems. Proficient in data technologies like MySQL, Hadoop, or Cassandra. Experience with batch processing, data pipelines, and data integrity practices. Familiarity with AWS services (e.g., RDS, Step Functions, EC2, Kinesis) is a plus. Solid understanding of …
West Midlands, United Kingdom Hybrid / WFH Options
Experis
Role Title: Hadoop Engineer/ODP Platform Location: Birmingham/Sheffield - Hybrid working with 3 days onsite per week End Date: 28/11/2025 Role Overview: We are seeking a highly skilled Hadoop Engineer to support and enhance our Operational Data Platform (ODP) deployed in an on-premises environment. The ideal candidate will have extensive experience … in the Hadoop ecosystem, strong programming skills, and a solid understanding of infrastructure-level data analytics. This role focuses on building and maintaining scalable, secure, and high-performance data pipelines within enterprise-grade on-prem systems. Key Responsibilities: Design, develop, and maintain data pipelines using Hadoop technologies in an on-premises infrastructure. Build and optimise workflows using Apache … and troubleshoot data jobs, ensuring reliability and performance across the platform. Ensure compliance with enterprise security and data governance standards. Required Skills & Experience: Minimum 5 years of experience in Hadoop and data engineering. Strong hands-on experience with Python, Apache Airflow, and Spark Streaming. Deep understanding of Hadoop components (HDFS, Hive, HBase, YARN) in on-prem environments. Exposure …
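For context, a minimal sketch of the kind of workflow this role describes: an Airflow DAG that submits a Spark batch job to YARN and then runs a Hive row-count check. The DAG name, schedule, queue, script paths, and table names are hypothetical placeholders, not details of the actual ODP platform.

```python
# Illustrative sketch only: a small Airflow DAG orchestrating an on-prem
# Hadoop pipeline. All identifiers and paths are assumptions for illustration.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="odp_daily_ingest",          # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="0 2 * * *",               # nightly run at 02:00
    catchup=False,
    default_args=default_args,
) as dag:
    # Submit a PySpark job to YARN; script path and queue are placeholders.
    ingest = BashOperator(
        task_id="spark_ingest",
        bash_command=(
            "spark-submit --master yarn --deploy-mode cluster "
            "--queue odp /opt/odp/jobs/ingest_events.py "
            "--run-date {{ ds }}"
        ),
    )

    # Basic data-quality check on the Hive table written by the ingest step.
    validate = BashOperator(
        task_id="hive_row_count_check",
        bash_command=(
            'beeline -u "jdbc:hive2://hive-server:10000" '
            "-e \"SELECT COUNT(*) FROM odp.events WHERE dt='{{ ds }}';\""
        ),
    )

    ingest >> validate
```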
London, South East, England, United Kingdom Hybrid / WFH Options
Randstad Technologies
Advert: Hadoop Engineer - 6 Months Contract - Remote working - £300 to £350 a day. A top-tier global consultancy firm is looking for an experienced Hadoop Engineer to join their team and contribute to large big data projects. The position requires a professional with a strong background in developing and managing scalable data pipelines, specifically using the Hadoop ecosystem and related tools. The role will focus on designing, building and maintaining scalable data pipelines using big data Hadoop ecosystems and Apache Spark for large datasets. A key responsibility is to analyse infrastructure logs and operational data to derive insights, demonstrating a strong understanding of both data processing and the underlying systems. The successful candidate should have … for scripting, Apache Spark, prior experience of building ETL pipelines, and data modelling. 6 Months Contract - Remote Working - £300 to £350 a day - Inside IR35. If you are an experienced Hadoop engineer looking for a new role then this is the perfect opportunity for you. If the above seems of interest to you then please apply directly to the AD …
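As a rough illustration of the log-analysis responsibility in this advert, here is a minimal PySpark sketch that aggregates raw infrastructure logs on HDFS into per-host error counts. The input path, field names, and output layout are assumptions made for the example, not part of the advertised project.

```python
# Illustrative sketch only: a small PySpark batch job turning infrastructure
# logs into per-host, per-hour error counts for downstream reporting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("infra-log-insights")
    .getOrCreate()
)

# Read newline-delimited JSON logs; each record is assumed to carry
# timestamp, host, level, and message fields.
logs = spark.read.json("hdfs:///data/raw/infra_logs/dt=2025-11-01/")

error_summary = (
    logs
    .filter(F.col("level").isin("ERROR", "FATAL"))
    .withColumn("hour", F.date_trunc("hour", F.col("timestamp")))
    .groupBy("host", "hour")
    .agg(
        F.count("*").alias("error_count"),
        F.countDistinct("message").alias("distinct_errors"),
    )
)

# Write the aggregate back to HDFS as Parquet, partitioned by hour.
(
    error_summary
    .write
    .mode("overwrite")
    .partitionBy("hour")
    .parquet("hdfs:///data/curated/infra_error_summary/")
)

spark.stop()
```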
Java Engineer (Java, MySQL, Hadoop, Cassandra, RDS & EC2) Location: London Contract Duration: 4 months *Urgent 4-Month Contract - Role hiring now!* This is a *London* based role with an excellent immediate start within a Global Technology Client that is working on *designing and building scalable, high-performance systems that support compliance, risk management, and regulatory processes*. About … Enhance tooling, analytics, and reporting for compliance officers. Skills & Experience: 3+ years software development experience (Java required). Strong experience with microservices and distributed systems. Proficiency with MySQL, Hadoop, Cassandra. Knowledge of AWS services (RDS, EC2, Step Functions, Kinesis) is a plus. Familiarity with testing (unit, integration, end-to-end). Background in compliance, payments, or FinTech …