About Agoda Agoda is an online travel booking platform for accommodations, flights, and more. We build and deploy cutting-edge technology that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, plus flights, activities, and more.
possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world. YOUR ROLE Capgemini is looking for a Hadoop Data Engineer in the Financial Services (FS) sector. This role focuses on building and maintaining scalable data systems for financial data analysis and … reporting, often involving expertise in Hadoop, Spark, and related technologies. YOUR PROFILE Expertise in Hadoop, Spark & Scala Experience in developing complex data transformation workflows (ETL) using Big Data technologies Good expertise in HIVE, Impala, HBase Hands-on experience fine-tuning Spark jobs Experience with Java and distributed computing ABOUT CAPGEMINI Capgemini is a global business and technology transformation
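As a minimal illustration of the kind of ETL transformation work this profile describes, here is a sketch in plain Python. A production version would typically run as a Spark job over Hive tables; the record format and field names here are invented for the example.

```python
from collections import defaultdict

def transform_transactions(raw_rows):
    """Toy ETL step: parse CSV-like rows, drop malformed records,
    and aggregate amounts per account (format is hypothetical)."""
    totals = defaultdict(float)
    for row in raw_rows:
        parts = row.split(",")
        if len(parts) != 2:
            continue  # skip malformed input rather than failing the batch
        account, amount = parts
        try:
            totals[account] += float(amount)
        except ValueError:
            continue  # non-numeric amount: also skipped
    return dict(totals)

rows = ["acc1,100.0", "acc2,50.5", "acc1,25.0", "bad_row"]
print(transform_transactions(rows))  # {'acc1': 125.0, 'acc2': 50.5}
```

The parse-filter-aggregate shape is the same one a Spark pipeline would express with `map`, `filter`, and `reduceByKey`; fine-tuning at scale then becomes a matter of partitioning and shuffle configuration rather than logic changes.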
to technical requirements and implementation. Experience of Big Data technologies/Big Data Analytics. C++, Java, Python, Shell Script, R, Matlab, SAS Enterprise Miner. Elasticsearch and understanding of the Hadoop ecosystem. Experience working with large data sets and with distributed computing tools like MapReduce, Hadoop, Hive, Pig, etc. Advanced use of Excel spreadsheets for
Linux, GitHub, Continuous Integration, Cloud technologies, Virtualisation Tools, Monitoring utilities, Disaster recovery process/tools Experience in troubleshooting and problem resolution Experience in System Integration Knowledge of the following: Hadoop, Flume, Sqoop, MapReduce, Hive/Impala, HBase, Kafka, Spark Streaming Experience of ETL tools incorporating Big Data Shell Scripting, Python Beneficial Skills: Understanding of: LAN, WAN, VPN and … SD Networks Hardware and cabling set-up experience Experience of implementing and supporting Big Data analytics platforms built on top of Hadoop Knowledge and appreciation of information security If you are looking for a challenging role in an exciting environment, then please do not hesitate to apply
External Description Reach beyond with Liberty IT; for this is where you'll find the super challenges, where you'll be given the scope and the support to go further, dig deeper and fly higher. We won't stand over
London, South East, England, United Kingdom Hybrid / WFH Options
Randstad Technologies
Advert Hadoop Engineer 6 Months Contract Remote working £300 to £350 a day A top tier global consultancy firm is looking for an experienced Hadoop Engineer to join their team and contribute to large big data projects. The position requires a professional with a strong background in developing and managing scalable data pipelines, specifically using the Hadoop ecosystem and related tools. The role will focus on designing, building and maintaining scalable data pipelines using the Big Data Hadoop ecosystem and Apache Spark for large datasets. A key responsibility is to analyse infrastructure logs and operational data to derive insights, demonstrating a strong understanding of both data processing and the underlying systems. The successful candidate should have … for Scripting Apache Spark Prior experience of building ETL pipelines Data Modelling 6 Months Contract - Remote Working - £300 to £350 a day Inside IR35 If you are an experienced Hadoop engineer looking for a new role then this is the perfect opportunity for you. If the above seems of interest to you then please apply directly to the ad
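The log-analysis responsibility above might look roughly like this in miniature. This is plain Python for illustration only; at the scale the role describes it would run as a Spark job, and the log format below is invented.

```python
import re
from collections import Counter

# Hypothetical log line shape: "<timestamp> <LEVEL> <message>"
LOG_LINE = re.compile(r"^(?P<ts>\S+) (?P<level>[A-Z]+) (?P<msg>.*)$")

def level_counts(lines):
    """Count log entries per severity level, ignoring unparseable lines."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            counts[m.group("level")] += 1
    return counts

logs = [
    "2024-01-01T10:00:00 INFO job started",
    "2024-01-01T10:01:00 ERROR disk quota exceeded",
    "2024-01-01T10:02:00 ERROR node lost",
]
print(level_counts(logs))  # Counter({'ERROR': 2, 'INFO': 1})
```

Deriving "insights" from infrastructure logs usually starts exactly here: parse into structured fields, then aggregate; on the cluster the same grouping would be a Spark `groupBy` over parsed log DataFrames.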
data processing and predictive analytics. Responsibilities Develop and implement machine learning models using Spark ML for predictive analytics. Design and optimize training and inference pipelines for distributed systems (e.g., Hadoop). Process and analyze large-scale datasets to extract meaningful insights and features. Collaborate with data engineers to ensure seamless integration of ML workflows with data pipelines. Evaluate model … technologies. Requirements: Proficiency in Apache Spark and Spark MLlib for machine learning tasks. Strong understanding of predictive modeling techniques (e.g., regression, classification, clustering). Experience with distributed systems like Hadoop for data storage and processing. Proficiency in Python, Scala, or Java for ML development. Familiarity with data preprocessing techniques and feature engineering. Knowledge of model evaluation metrics and techniques.
data processing and predictive analytics. Role: Develop and implement machine learning models using Spark ML for predictive analytics Design and optimise training and inference pipelines for distributed systems (e.g., Hadoop) Process and analyse large-scale datasets to extract meaningful insights and features Collaborate with data engineers to ensure seamless integration of ML workflows with data pipelines Evaluate model performance … computing technologies Experience: Proficiency in Apache Spark and Spark MLlib for machine learning tasks Strong understanding of predictive modeling techniques (e.g., regression, classification, clustering) Experience with distributed systems like Hadoop for data storage and processing Proficiency in Python, Scala, or Java for ML development Familiarity with data preprocessing techniques and feature engineering Knowledge of model evaluation metrics and techniques
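Both ads above list regression among the expected predictive modeling techniques. As a tiny self-contained illustration, here is ordinary least squares for a one-variable linear model in pure Python; on the datasets these roles describe this would be fitted with Spark MLlib's regression APIs instead, and the numbers are made up.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b: the simplest member
    of the regression family named in the requirements above."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    covariance = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    variance = sum((x - mean_x) ** 2 for x in xs)
    slope = covariance / variance
    intercept = mean_y - slope * mean_x
    return slope, intercept

a, b = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
print(a, b)  # 2.0 0.0
```

Model evaluation (also listed above) then reduces to comparing predictions `a*x + b` against held-out `y` values with a metric such as mean squared error.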
Belfast, County Antrim, Northern Ireland, United Kingdom
McGregor Boyall
work across regulatory and transformation initiatives that span multiple trading desks, functions, and stakeholders. You'll build PySpark and SQL queries to interrogate, reconcile and analyse data, contribute to Hadoop data architecture discussions, and help improve reporting processes and data quality. You'll be hands-on across technical delivery, documentation, testing, and stakeholder engagement. It's a technically rich … high-impact project work at one of the world's most complex financial institutions. Key Skills: Strong hands-on experience with SQL, Python, Spark Background in Big Data/Hadoop environments Solid understanding of ETL/Data Warehousing concepts Strong communicator, with the ability to explain technical concepts to senior stakeholders Details: Location: Belfast - 3 days/week onsite
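A toy version of the reconciliation work described above, using the stdlib `sqlite3` module in place of the Spark SQL/Hive stack the role actually uses. The table and column names are invented; the join pattern is the point.

```python
import sqlite3

# Two hypothetical feeds of the same trades; reconcile by trade id.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source_a (trade_id TEXT, notional REAL);
    CREATE TABLE source_b (trade_id TEXT, notional REAL);
    INSERT INTO source_a VALUES ('T1', 100.0), ('T2', 250.0), ('T3', 75.0);
    INSERT INTO source_b VALUES ('T1', 100.0), ('T2', 200.0);
""")

# Breaks: trades missing from source_b, or present with a different notional.
breaks = conn.execute("""
    SELECT a.trade_id
    FROM source_a a
    LEFT JOIN source_b b ON a.trade_id = b.trade_id
    WHERE b.trade_id IS NULL OR a.notional <> b.notional
    ORDER BY a.trade_id
""").fetchall()
print(breaks)  # [('T2',), ('T3',)]
```

The same LEFT JOIN / anti-join shape carries over directly to Spark SQL when the two "sources" are Hive tables with millions of rows.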
Growth Revenue Management, Marketing Analytics, CLM/CRM Analytics and/or Risk Analytics. Conduct analyses in typical analytical tools ranging from SAS, SPSS, Eviews, R, Python, SQL, Teradata, Hadoop, Access, Excel, etc. Communicate analyses via compelling presentations. Solve problems, disaggregate issues, develop hypotheses and develop actionable recommendations from data and analysis. Prepare and facilitate workshops. Manage stakeholders and … An ability to think analytically, decompose problem sets, develop hypotheses and recommendations from data analysis. Strong technical skills regarding data analysis, statistics, and programming. Strong working knowledge of Python, Hadoop, SQL, and/or R. Working knowledge of Python data tools (e.g. Jupyter, Pandas, Scikit-Learn, Matplotlib). Ability to talk the language of statistics, finance, and economics a
users or large data sets with 10M+ database records. This is a very Big Data platform. Experience building REST services (orchestration layer) on CRUD data services based on the Cloudera Hadoop stack, with an emphasis on performance optimization. Understanding how to secure data in a REST architecture. Knowledge of scaling web applications, including load balancing, caching, indexing, normalization, etc. Proficiency in Java/Spring web application development. Experience with Test Driven Development and Agile methodologies; Behavior Driven Development is a plus. Knowledge of Hadoop, Big Data, Hive, Pig, NoSQL is a plus, though most engineers with this background may have limited REST experience. Additional Information All your information will be kept confidential according to EEO guidelines. Direct Staffing Inc
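Of the scaling techniques listed above, caching is the easiest to sketch: a memoizing cache in front of an expensive read path so repeated requests never touch the backing store. This is a toy stand-in in Python (the role's actual stack is Java/Spring over Cloudera, and `get_record` is a hypothetical function).

```python
from functools import lru_cache

calls = {"n": 0}  # track how often the backing store is actually hit

@lru_cache(maxsize=1024)
def get_record(record_id):
    """Hypothetical expensive read; the real service would query the
    Hive/HBase layer. Repeated ids are served from the in-process cache."""
    calls["n"] += 1
    return {"id": record_id, "status": "active"}

get_record("42")
get_record("42")  # cache hit: backing store touched only once
print(calls["n"])  # 1
```

In a REST orchestration layer the same idea usually sits out-of-process (e.g. a shared cache keyed by URL plus query parameters) so that all load-balanced instances benefit, at the cost of an invalidation strategy for writes.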
/MOD or Enhanced DV Clearance. WE NEED THE PYTHON/DATA ENGINEER TO HAVE: Current DV Security Clearance (Standard or Enhanced) Experience with big data tools such as Hadoop, Cloudera or Elasticsearch Python/PySpark experience Experience with Palantir Foundry is nice to have Experience working in an Agile Scrum environment with tools such as Confluence/Jira …/DEVELOPED VETTING/DEVELOPED VETTED/DEEP VETTING/DEEP VETTED/SC CLEARED/SC CLEARANCE/SECURITY CLEARED/SECURITY CLEARANCE/NIFI/CLOUDERA/HADOOP/KAFKA/ELASTIC SEARCH/LEAD BIG DATA ENGINEER/LEAD BIG DATA DEVELOPER
Financial Services, Manufacturing, Life Sciences and Healthcare, Technology and Services, Telecom and Media, Retail and CPG, and Public Services. Consolidated revenues of $13+ billion. Location - London Skill - Apache Hadoop We are looking for open-source contributors to Apache projects who have an in-depth understanding of the code behind the Apache ecosystem, should have experience in Cloudera or … hybrid cloud environment Ability to debug and fix code in the open-source Apache codebase and should be an individual contributor to open-source projects. Job description: The Apache Hadoop project requires up to 3 individuals with experience in designing and building platforms, and supporting applications both in cloud environments and on-premises. These resources are expected to be … to support all developers in migrating and debugging various RiskFinder critical applications. They need to be "Developers" who are expert in designing and building Big Data platforms using Apache Hadoop and support Apache Hadoop implementations both in cloud environments and on-premises.
Analyst, Global Network & Optimization Solutions • Conduct data analysis for customer engagements and strategic initiatives across Acceptance Solutions (pull data, interpret insights, make recommendations). • Serve as first-line support to our sellers and customers during deal cycles. • Collaborate with the
West Midlands, United Kingdom Hybrid / WFH Options
Experis
Role Title: Hadoop Engineer/ODP Platform Location: Birmingham/Sheffield - Hybrid working with 3 days onsite per week End Date: 28/11/2025 Role Overview: We are seeking a highly skilled Hadoop Engineer to support and enhance our Operational Data Platform (ODP) deployed in an on-premises environment.