London, South East, England, United Kingdom Hybrid / WFH Options
Randstad Technologies
Hadoop Engineer
6 Months Contract - Remote Working - £300 to £350 a day - Inside IR35

A top-tier global consultancy firm is looking for an experienced Hadoop Engineer to join their team and contribute to large big data projects. The position requires a professional with a strong background in developing and managing scalable data pipelines, specifically using the Hadoop ecosystem and related tools.

The role will focus on designing, building and maintaining scalable data pipelines for large datasets using the big data Hadoop ecosystem and Apache Spark. A key responsibility is to analyse infrastructure logs and operational data to derive insights, demonstrating a strong understanding of both data processing and the underlying systems.

The successful candidate should have:
… for Scripting
Apache Spark
Prior experience of building ETL pipelines
Data Modelling

If you are an experienced Hadoop Engineer looking for a new role, then this is the perfect opportunity for you. If the above seems of interest to you, please apply directly to the advert.
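To illustrate the kind of work the role involves, here is a minimal sketch of deriving insights from infrastructure logs. It uses plain Python and an invented log format purely for illustration; in the role itself this aggregation would typically run as a Spark job over logs held in HDFS, and the actual log formats and tooling would be defined by the client's environment.

```python
import re
from collections import Counter

# Hypothetical log format, assumed for illustration only:
# "<timestamp> <LEVEL> <component>: <message>"
LOG_LINE = re.compile(r"^\S+ (?P<level>[A-Z]+) (?P<component>\S+): ")

def error_counts_by_component(lines):
    """Count ERROR entries per component -- the kind of operational
    aggregation that, at scale, would be expressed as a Spark job."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("component")] += 1
    return dict(counts)

# Made-up sample records standing in for real infrastructure logs.
sample = [
    "2024-01-01T00:00:00 INFO datanode: heartbeat ok",
    "2024-01-01T00:00:01 ERROR namenode: replication below threshold",
    "2024-01-01T00:00:02 ERROR namenode: replication below threshold",
    "2024-01-01T00:00:03 ERROR yarn: container exited with code 1",
]
print(error_counts_by_component(sample))  # → {'namenode': 2, 'yarn': 1}
```

The same shape maps directly onto Spark: read the log files into a DataFrame, filter on the level column, and group by component.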