the latest tools and technologies to design, develop, and implement solutions that transform businesses and drive innovation. What will your job look like? 4+ years of relevant experience in Hadoop with Scala development. It is mandatory that the candidate has handled more than 2 projects in the above framework using Scala. Should have 4+ years of relevant experience in … handling end-to-end Big Data technology. Meeting with the development team to assess the company's big data infrastructure. Designing and coding Hadoop applications to analyze data collections. Creating data processing frameworks. Extracting data and isolating data clusters. Testing scripts and analyzing results. Troubleshooting application bugs. Maintaining the security of company data. Training staff on application use. Good … platform & data development roles. 5+ years of experience in big data technology, ranging from platform architecture, data management, data architecture and application architecture. High proficiency with the Hadoop platform, including Spark/Scala, Kafka, Spark SQL, HBase, Impala, Hive and HDFS in multi-tenant environments. Solid base in data technologies such as warehousing, ETL, MDM, DQ, BI and analytical …
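As a flavour of the Spark-plus-Kafka skill set this kind of listing asks for, here is a minimal sketch of a streaming ingest job. It is written in PySpark for brevity (the role above calls for Scala), and the broker address, topic name, schema, and paths are illustrative assumptions, not details from any posting.

```python
# Minimal sketch: consume JSON events from Kafka with Spark Structured
# Streaming and land them on HDFS as Parquet. Requires the
# spark-sql-kafka connector on the classpath; broker, topic, and paths
# are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-to-hdfs").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("event", StringType()),
    StructField("ts", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
    .option("subscribe", "events")                      # placeholder topic
    .load()
    # Kafka delivers bytes; decode the value and parse the JSON payload.
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "hdfs:///data/events")              # placeholder path
    .option("checkpointLocation", "hdfs:///chk/events")
    .start()
)
query.awaitTermination()
```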
enhances complex and diverse Big Data Cloud systems based upon documented requirements. Directly contributes to all stages of back-end processing, analyzing, and indexing. Provides expertise in Cloud Computing and the Hadoop ecosystem, including implementing Java applications, Distributed Computing, Information Retrieval (IR), and Object-Oriented Design. Works individually or as part of a team. Reviews and tests software components for … substituted for a bachelor's degree. A master's degree in Computer Science or a related discipline from an accredited college or university may be substituted for two (2) years of experience. A Cloudera Certified Hadoop Developer certification may be substituted for one (1) year of Cloud experience. 2. The following Cloud-related experiences are required: 3. a. Two (2) years of Cloud and/ …
reconcile, and interrogate data. Provide actionable recommendations to improve reporting processes, e.g. enhancing data quality, streamlining workflows, and optimizing query performance. Contribute to architecture and design discussions in a Hadoop-based environment. Translate high-level architecture and requirements into detailed design and code. Lead and guide complex, high-impact projects across all stages of development and implementation while ensuring … coordinating deliverables, and ensuring timely, high-quality execution. Required Skills & Experience: Proficiency in SQL, Python, and Spark. Minimum 5 years of hands-on technical data analysis experience. Familiarity with Hadoop/Big Data environments. Understanding of Data Warehouse/ETL design and development methodologies. Ability to perform under pressure and adapt to changing priorities or requirements. Strong communication skills …
West Midlands, United Kingdom Hybrid / WFH Options
Experis
Role Title: Hadoop Engineer/ODP Platform. Location: Birmingham/Sheffield - hybrid working with 3 days onsite per week. End Date: 28/11/2025. Role Overview: We are seeking a highly skilled Hadoop Engineer to support and enhance our Operational Data Platform (ODP) deployed in an on-premises environment. The ideal candidate will have extensive experience … in the Hadoop ecosystem, strong programming skills, and a solid understanding of infrastructure-level data analytics. This role focuses on building and maintaining scalable, secure, and high-performance data pipelines within enterprise-grade on-prem systems. Key Responsibilities: Design, develop, and maintain data pipelines using Hadoop technologies in an on-premises infrastructure. Build and optimise workflows using Apache … and troubleshoot data jobs, ensuring reliability and performance across the platform. Ensure compliance with enterprise security and data governance standards. Required Skills & Experience: Minimum 5 years of experience in Hadoop and data engineering. Strong hands-on experience with Python, Apache Airflow, and Spark Streaming. Deep understanding of Hadoop components (HDFS, Hive, HBase, YARN) in on-prem environments. Exposure …
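As a flavour of the Airflow-orchestrated pipeline work this role describes, below is a minimal sketch of a DAG that submits a Spark job to YARN and then runs a validation task. The DAG name, schedule, task names, and script path are invented for illustration, not taken from the posting.

```python
# Minimal sketch of an Airflow DAG (Airflow 2.x style) that runs a
# spark-submit step followed by a simple validation task. All names,
# paths, and the schedule are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def check_output_exists():
    # Placeholder validation: a real pipeline might query Hive or
    # check HDFS paths here; this version only logs a message.
    print("validating pipeline output ...")


with DAG(
    dag_id="odp_daily_ingest",          # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="spark_ingest",
        # Submit a Spark job to YARN; the script path is a placeholder.
        bash_command="spark-submit --master yarn /jobs/ingest.py",
    )
    validate = PythonOperator(
        task_id="validate_output",
        python_callable=check_output_exists,
    )

    ingest >> validate  # run validation only after the ingest succeeds
```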
Enterprise-class arrays. • Solid State Disk (SSD). • NFS/CIFS-based server/storage appliances. • HPSE. • Data Domain and similar deduplication products. • Cloud-based storage solutions such as Hadoop and IBM BigInsights. • Trouble-ticket management using Remedy. Requirements: IAT Level II certification required. Equal Opportunity Employer: Veterans/Disabled.
testing out technology. If you enjoy doing that, you will feel at home in this job. You have a bachelor's or master's degree in IT and are fully up for getting to know Cloud, Mainframe, Hadoop, Kubernetes and Docker infrastructure better. You speak and write fluent English. Offer: At KBC IT you are assured of an excellent onboarding, collegiality and a …
based insights, collaborating closely with stakeholders. Passionately discover hidden solutions in large datasets to enhance business outcomes. Design, develop, and maintain data processing pipelines using Cloudera technologies, including Apache Hadoop, Apache Spark, Apache Hive, and Python. Collaborate with data engineers and scientists to translate data requirements into technical specifications. Develop and maintain frameworks for efficient data extraction, transformation, and … and verbal communication skills for effective team collaboration. Eagerness to learn and master new technologies and techniques. Experience with AutoSys is preferred. Experience with distributed data/computing tools: Hadoop, Hive, MySQL, etc. If you're a passionate Cloudera Developer eager to make a difference in the banking industry, we want to hear from you! Apply now to join …
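To make the "pipelines with Spark, Hive, and Python" requirement concrete, here is a minimal batch-ETL sketch: read a Hive table, apply a transformation, and write the result back as a partitioned table. The database, table, and column names are hypothetical.

```python
# Minimal sketch of a PySpark batch ETL step against Hive. The
# database, table, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = (
    SparkSession.builder.appName("daily-transactions-etl")
    .enableHiveSupport()  # lets Spark read/write Hive metastore tables
    .getOrCreate()
)

# Extract: read the raw table registered in the Hive metastore.
raw = spark.table("staging.transactions_raw")

# Transform: keep valid rows, normalize types, derive a partition column.
clean = (
    raw.filter(col("amount") > 0)
    .withColumn("txn_date", to_date(col("txn_ts")))
    .select("txn_id", "account_id", "amount", "txn_date")
)

# Load: write back as a partitioned Hive table for downstream queries.
(
    clean.write.mode("overwrite")
    .partitionBy("txn_date")
    .saveAsTable("curated.transactions")
)
```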
Jenkins, Ansible, Docker, Kubernetes, etc. Desired Experience: Knowledge of the following Big Data technologies: Data Ingest (JSON, Kafka, Microservices, Elasticsearch), Analytics (Hive, Spark, R, Pig, Oozie workflows), Elasticsearch, Hadoop (Hive data, Oozie, Spark, Pig, Impala, Hue), COTS Integration (Knowi, MongoDB, Oracle, MySQL RDS, Elastic, Logstash, Kibana, ZooKeeper, Consul, Hadoop/HDFS), Docker, and Chef.
About Agoda: Agoda is an online travel booking platform for accommodations, flights, and more. We build and deploy cutting-edge technology that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, plus flights, activities, and more.
Data Platforms Team. This is a fantastic opportunity to work on large-scale distributed data systems that drive business-critical operations. About the Team and the Role: The Hadoop Team is central to the company's big data infrastructure. They deliver scalable, robust solutions for data processing and storage, aligned with strategic goals to drive data-informed decision … making and improve user experience. The team is responsible for optimizing Hadoop-based platforms that underpin key innovation initiatives across the organization. Key Responsibilities: System Optimization & Scalability: Lead the enhancement of Hadoop-based systems to ensure high availability, fault tolerance, and scalability. Your work will be essential in maintaining reliable performance and preparing the platform for future growth. … customer-facing tools to streamline data access, management, and user interaction. Enhance the operational efficiency and experience for both internal and external stakeholders. System Integration: Ensure seamless integration of Hadoop with other platforms and tools. Enable cross-functional teams to effectively leverage data with reduced manual intervention, driving strategic decision-making. Required Skills & Experience: Bachelor's degree in Computer …
to technical requirements and implementation. Experience of Big Data technologies/Big Data analytics. C++, Java, Python, shell scripting, R, MATLAB, SAS Enterprise Miner. Elasticsearch and an understanding of the Hadoop ecosystem. Experience working with large data sets and with distributed computing tools such as MapReduce, Hadoop, Hive, Pig, etc. Advanced use of Excel spreadsheets for …
Linux, GitHub, Continuous Integration, Cloud technologies, virtualisation tools, monitoring utilities, disaster recovery processes/tools. Experience in troubleshooting and problem resolution. Experience in system integration. Knowledge of the following: Hadoop, Flume, Sqoop, MapReduce, Hive/Impala, HBase, Kafka, Spark Streaming. Experience of ETL tools incorporating Big Data. Shell scripting, Python. Beneficial Skills: Understanding of: LAN, WAN, VPN and … SD networks. Hardware and cabling set-up experience. Experience of implementing and supporting Big Data analytics platforms built on top of Hadoop. Knowledge and appreciation of information security. If you are looking for a challenging role in an exciting environment, then please do not hesitate to apply.
External Description: Reach beyond with Liberty IT, for this is where you'll find the super challenges, where you'll be given the scope and the support to go further, dig deeper and fly higher. We won't stand over …
London, South East, England, United Kingdom Hybrid / WFH Options
Randstad Technologies
Advert: Hadoop Engineer, 6-month contract, remote working, £300 to £350 a day. A top-tier global consultancy firm is looking for an experienced Hadoop Engineer to join their team and contribute to large big data projects. The position requires a professional with a strong background in developing and managing scalable data pipelines, specifically using the Hadoop ecosystem and related tools. The role will focus on designing, building and maintaining scalable data pipelines using the Big Data Hadoop ecosystem and Apache Spark for large datasets. A key responsibility is to analyse infrastructure logs and operational data to derive insights, demonstrating a strong understanding of both data processing and the underlying systems. The successful candidate should have … for scripting, Apache Spark, prior experience of building ETL pipelines, and data modelling. 6-month contract - remote working - £300 to £350 a day, inside IR35. If you are an experienced Hadoop engineer looking for a new role then this is the perfect opportunity for you. If the above seems of interest to you then please apply directly to the ad …
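The "analyse infrastructure logs to derive insights" responsibility typically boils down to jobs like the following minimal sketch, which parses raw log lines with Spark and aggregates error counts per host. The log format, regex, and input path are assumptions for illustration.

```python
# Minimal sketch: parse syslog-style lines with Spark and count errors
# per host. The log format, regex, and paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, regexp_extract

spark = SparkSession.builder.appName("log-insights").getOrCreate()

# Assume lines like: "2025-01-01T12:00:00 host-42 ERROR disk full"
lines = spark.read.text("hdfs:///logs/infra/*.log")  # placeholder path

pattern = r"^(\S+)\s+(\S+)\s+(\S+)\s+(.*)$"
parsed = lines.select(
    regexp_extract("value", pattern, 1).alias("ts"),
    regexp_extract("value", pattern, 2).alias("host"),
    regexp_extract("value", pattern, 3).alias("level"),
    regexp_extract("value", pattern, 4).alias("msg"),
)

# Aggregate: which hosts are producing the most errors?
errors_by_host = (
    parsed.filter(col("level") == "ERROR")
    .groupBy("host")
    .count()
    .orderBy(col("count").desc())
)

errors_by_host.show(20, truncate=False)
```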
scalable Big Data Store (NoSQL) such as HBase, CloudBase/Accumulo, BigTable, etc.; Shall have demonstrated work experience with the MapReduce programming model and technologies such as Hadoop, Hive, Pig, etc.; Shall have demonstrated work experience with the Hadoop Distributed File System (HDFS); Shall have demonstrated work experience with serialization such as JSON and/or …
users or large data sets with 10M+ database records. This is very much a Big Data platform. Experience building REST services (orchestration layer) on CRUD data services based on the Cloudera Hadoop stack, with an emphasis on performance optimization. Understanding how to secure data in a REST architecture. Knowledge of scaling web applications, including load balancing, caching, indexing, normalization, etc. Proficiency in Java/Spring web application development. Experience with Test-Driven Development and Agile methodologies; Behavior-Driven Development is a plus. Knowledge of Hadoop, Big Data, Hive, Pig, NoSQL is a plus, though most engineers with this background may have limited REST experience. Additional Information: All your information will be kept confidential according to EEO guidelines. Direct Staffing Inc.
Growth Revenue Management, Marketing Analytics, CLM/CRM Analytics and/or Risk Analytics. Conduct analyses in typical analytical tools ranging from SAS, SPSS, EViews, R, Python, SQL, Teradata, Hadoop, Access, Excel, etc. Communicate analyses via compelling presentations. Solve problems, disaggregate issues, develop hypotheses and develop actionable recommendations from data and analysis. Prepare and facilitate workshops. Manage stakeholders and … An ability to think analytically, decompose problem sets, develop hypotheses and recommendations from data analysis. Strong technical skills regarding data analysis, statistics, and programming. Strong working knowledge of Python, Hadoop, SQL, and/or R. Working knowledge of Python data tools (e.g. Jupyter, Pandas, Scikit-Learn, Matplotlib). Ability to talk the language of statistics, finance, and economics a …
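As a small illustration of the pandas-style analysis such roles expect, the sketch below loads a CSV, cleans it, and summarises revenue by segment. The file name and column names are hypothetical.

```python
# Minimal sketch of a pandas analysis: load, clean, and summarise.
# The file name and column names are hypothetical.
import pandas as pd

# Load raw data (hypothetical extract of customer transactions).
df = pd.read_csv("transactions.csv", parse_dates=["txn_date"])

# Basic cleaning: drop rows without an amount, clip obvious outliers.
df = df.dropna(subset=["amount"])
df = df[df["amount"].between(0, df["amount"].quantile(0.99))]

# Summarise revenue and customer counts per marketing segment.
summary = (
    df.groupby("segment")
    .agg(revenue=("amount", "sum"), customers=("customer_id", "nunique"))
    .sort_values("revenue", ascending=False)
)

print(summary)
```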
models in production and adjusting model thresholds to improve performance. Experience designing, running, and analyzing complex experiments or leveraging causal inference designs. Experience with distributed tools such as Spark, Hadoop, etc. A PhD or MS in a quantitative field (e.g., Statistics, Engineering, Mathematics, Economics, Quantitative Finance, Sciences, Operations Research). Hybrid work at Stripe: office-assigned Stripes spend at least …
The Red Gate Group is seeking a Data Engineer to support the Defense Threat Reduction Agency (DTRA) in Reston, VA. In this role, you will transform complex, data-rich environments into actionable intelligence that directly impacts national security. You'll …
Python, Java, AWS infrastructure, Linux, Kubernetes, Hadoop, CI/CD, Big Data Platform, Agile, JIRA, Confluence, GitHub, GitLab, Puppet, Ansible, Maven, virtualization, oVirt, Proxmox, VMware, Shell/Bash scripting. Due to federal contract requirements, United States citizenship and an active TS/SCI security clearance and polygraph are required for the position. Required: Must be a US citizen. Must … or related discipline from an accredited college or university. Prior experience or familiarity with DISA's Big Data Platform or other Big Data systems (e.g. Cloudera's Distribution of Hadoop, Hortonworks Data Platform, MapR, etc.) is a plus. Experience with CI/CD pipelines (e.g. GitLab CI, Travis CI, etc.). Understanding of agile software development methodologies and use …
demonstrated work experience with: • Distributed scalable Big Data Store (NoSQL) such as HBase, CloudBase/Accumulo, BigTable, etc. • MapReduce programming model and technologies such as Hadoop, Hive, Pig, etc. • Hadoop Distributed File System (HDFS). • Serialization such as JSON and/or BSON. • 4 years of SWE experience may be substituted for a …
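Since several of these postings name the MapReduce programming model, here is the canonical word-count example written in the Hadoop Streaming style in Python. This is a generic illustration of the model, not code from any of the employers above; in Streaming the mapper and reducer would normally be separate scripts.

```python
# Canonical MapReduce word count, Hadoop Streaming style. The mapper
# and reducer are shown together here for brevity, with a local
# simulation of the map -> shuffle/sort -> reduce flow.
import sys
from itertools import groupby


def mapper(lines):
    # Map: emit (word, 1) for every word in the input.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1


def reducer(pairs):
    # Reduce: Hadoop sorts map output by key, so equal words are
    # adjacent and can be summed with groupby.
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)


if __name__ == "__main__":
    mapped = sorted(mapper(sys.stdin))  # stand-in for the shuffle/sort
    for word, total in reducer(mapped):
        print(f"{word}\t{total}")
```

Run locally with `cat input.txt | python wordcount.py`; on a real cluster, the mapper and reducer halves would be passed to the Hadoop Streaming jar as separate `-mapper` and `-reducer` scripts.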