London, South East, England, United Kingdom Hybrid / WFH Options
Randstad Technologies
scalable data pipelines, specifically using the Hadoop ecosystem and related tools. The role will focus on designing, building and maintaining scalable data pipelines using big data Hadoop ecosystems and Apache Spark for large datasets. A key responsibility is to analyse infrastructure logs and operational data to derive insights, demonstrating a strong understanding of both data processing and the … underlying systems. The successful candidate should have the following key skills: experience with Open Data Platform; hands-on experience with Python for scripting; Apache Spark; prior experience of building ETL pipelines; and data modelling. 6 Months Contract - Remote Working - £300 to £350 a day Inside IR35. If you are an experienced Hadoop engineer looking for a new role then More ❯
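For illustration, the kind of Spark-based ETL work described in the listing above might look roughly like the minimal PySpark sketch below; the log schema, HDFS paths and column names are assumptions for the example, not details taken from the advert.

```python
# Minimal PySpark ETL sketch (illustrative only): source path, schema and
# output location are hypothetical, not taken from the advert.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("infra-log-etl").getOrCreate()

# Read raw infrastructure logs landed on HDFS (path is an assumption).
raw = spark.read.json("hdfs:///data/raw/infra_logs/")

# Basic cleansing and enrichment: parse timestamps and keep error events.
errors = (
    raw.withColumn("event_ts", F.to_timestamp("timestamp"))
       .filter(F.col("level") == "ERROR")
       .withColumn("event_date", F.to_date("event_ts"))
)

# Aggregate error counts per host and day to support operational insight.
daily_errors = errors.groupBy("host", "event_date").agg(
    F.count("*").alias("error_count")
)

# Write the modelled output back to HDFS as partitioned Parquet.
daily_errors.write.mode("overwrite").partitionBy("event_date").parquet(
    "hdfs:///data/curated/infra_error_counts/"
)

spark.stop()
```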
Role Title: Infrastructure/Platform Engineer - Apache Duration: 9 Months Location: Remote Rate: £ - Umbrella only Would you like to join a global leader in consulting, technology services and digital transformation? Our client is at the forefront of innovation to address the entire breadth of opportunities in the evolving world of cloud, digital and platforms. Role purpose/summary: Refactor … prototype Spark jobs into production-quality components, ensuring scalability, test coverage, and integration readiness. Package Spark workloads for deployment via Docker/Kubernetes and integrate with orchestration systems (e.g., Airflow, custom schedulers). Work with platform engineers to embed Spark jobs into InfoSum's platform APIs and data pipelines. Troubleshoot job failures, memory and resource issues, and … execution anomalies across various runtime environments. Optimise Spark job performance and advise on best practices to reduce cloud compute and storage costs. Guide engineering teams on choosing the right execution strategies across AWS, GCP, and Azure. Provide subject matter expertise on using AWS Glue for ETL workloads and integration with S3 and other AWS-native services. Implement observability tooling More ❯
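"Refactor prototype Spark jobs into production-quality components", as described in the listing above, typically means turning notebook-style code into a parameterised, unit-testable module with a clean entry point that can be packaged into a Docker image and launched by spark-submit or an orchestrator. A minimal sketch, with hypothetical paths, column names and job name:

```python
# Sketch of a prototype Spark transformation refactored into a testable unit.
# Paths, columns and the job name are illustrative assumptions.
import argparse
from pyspark.sql import SparkSession, DataFrame, functions as F


def dedupe_events(df: DataFrame) -> DataFrame:
    """Pure transformation: easy to unit-test with a small local DataFrame."""
    return df.dropDuplicates(["event_id"]).withColumn(
        "processed_at", F.current_timestamp()
    )


def run(spark: SparkSession, input_path: str, output_path: str) -> None:
    events = spark.read.parquet(input_path)
    dedupe_events(events).write.mode("overwrite").parquet(output_path)


def main() -> None:
    # CLI arguments let the same image be reused across environments,
    # for example when driven by Airflow or a custom scheduler.
    parser = argparse.ArgumentParser()
    parser.add_argument("--input-path", required=True)
    parser.add_argument("--output-path", required=True)
    args = parser.parse_args()

    spark = SparkSession.builder.appName("events-dedupe").getOrCreate()
    try:
        run(spark, args.input_path, args.output_path)
    finally:
        spark.stop()


if __name__ == "__main__":
    main()
```

Keeping the transformation separate from the I/O and session handling is what makes the job straightforward to cover with tests and to embed in a container image.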
Experience working in BFSI or enterprise-scale environments is a plus. Preferred: Exposure to cloud platforms (AWS, Azure, GCP) and their data services. Knowledge of Big Data platforms (Hadoop, Spark, Snowflake, Databricks). Familiarity with data governance and data catalog tools. More ❯
with Azure Data Factory, Azure Functions, and Synapse Analytics. Proficient in Python and advanced SQL, including query tuning and optimisation. Hands-on experience with big data tools such as Spark, Hadoop, and Kafka. Familiarity with CI/CD pipelines, version control, and deployment automation. Experience using Infrastructure as Code tools like Terraform. Solid understanding of Azure-based networking and More ❯
computer vision. Hands-on with data engineering, model deployment (MLOps), and cloud platforms (AWS, Azure, GCP). Strong problem-solving, algorithmic, and analytical skills. Knowledge of big data tools (Spark, Hadoop) is a plus. More ❯
West Midlands, United Kingdom Hybrid / WFH Options
Experis
data pipelines within enterprise-grade on-prem systems. Key Responsibilities: Design, develop, and maintain data pipelines using Hadoop technologies in an on-premises infrastructure. Build and optimise workflows using Apache Airflow and Spark Streaming for real-time data processing. Develop robust data engineering solutions using Python for automation and transformation. Collaborate with infrastructure and analytics teams to support … platform. Ensure compliance with enterprise security and data governance standards. Required Skills & Experience: Minimum 5 years of experience in Hadoop and data engineering. Strong hands-on experience with Python, Apache Airflow, and Spark Streaming. Deep understanding of Hadoop components (HDFS, Hive, HBase, YARN) in on-prem environments. Exposure to data analytics, preferably involving infrastructure or operational data. Experience More ❯
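As a rough illustration of the Airflow orchestration described in the listing above, a minimal DAG that submits a Spark job on a daily schedule might look like the sketch below; the application path, connection id and resource settings are assumptions, and it presumes Airflow 2.x with the Apache Spark provider package installed.

```python
# Illustrative Airflow DAG submitting a Spark job daily.
# Application path, connection id and Spark conf values are hypothetical;
# requires the apache-airflow-providers-apache-spark package.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="hadoop_ingest_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Hands the job script to the cluster via spark-submit using the
    # connection configured in Airflow (here the default "spark_default").
    submit_spark_job = SparkSubmitOperator(
        task_id="transform_operational_data",
        application="/opt/jobs/transform_operational_data.py",  # hypothetical script
        conn_id="spark_default",
        conf={"spark.executor.memory": "4g", "spark.executor.cores": "2"},
    )
```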
management disciplines, including data integration, modelling, optimisation, data quality and Master Data Management. Experience with database technologies such as RDBMS (SQL Server, Oracle) or NoSQL (MongoDB). Knowledge of Apache technologies such as Spark, Kafka and Airflow to build scalable and efficient data pipelines. Have worked on migration projects and some experience with management systems such as SAP More ❯
Employment Type: Contract
Rate: £700 - £750/day £700-750 Per Day (Inside IR35)
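For the Spark/Kafka pipeline skills mentioned in the listing above, a minimal Spark Structured Streaming sketch that consumes a Kafka topic and lands the data as Parquet could look like this; broker addresses, the topic name and paths are illustrative assumptions, and the spark-sql-kafka connector must be available on the cluster.

```python
# Minimal Spark Structured Streaming sketch: Kafka in, Parquet out.
# Brokers, topic and paths are assumptions for the example only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Subscribe to a hypothetical "orders" topic.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to string for parsing downstream.
parsed = stream.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Continuously append micro-batches to a data-lake location with checkpointing,
# so the job can recover its Kafka offsets after a restart.
query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/lake/orders/")
    .option("checkpointLocation", "/data/checkpoints/orders/")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```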
Data Engineer/Technical Support Engineer - Client Facing (Remote - UK) Location: 3 days per week in the office (Office in Sheffield, UK) Contract: 6-Month Contract Rate: £400 per day - Inside IR35 Role Overview: We are looking for a highly More ❯
We're undertaking a fast paced data transformation into Databricks at E.ON Next using best practice data governance and architectural principles, and we are growing our data engineering capability within the Data Team. As part of our journey we're More ❯