o Relational databases such as SQL Server, Oracle DB, IBM DB
o Non-relational databases such as MongoDB, Redis
o Data warehouse and data lake tools such as Snowflake, Hadoop
o File servers, NAS, Isilon, cloud drives
• Ability to work collaboratively with key stakeholders (Data & Analytics group; Data Strategy & Management team; Access, Network and Security Architects; various Security Engineers …
Supermicro is a top-tier provider of advanced server, storage, and networking solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, Hyperscale, HPC and IoT/Embedded customers worldwide. We are among the fastest-growing companies in the Silicon Valley Top 50 technology firms. Our unprecedented global …
None
Preferred education: Bachelor's Degree
Required technical and professional expertise: Design, develop, and maintain Java-based applications for processing and analyzing large datasets, utilizing frameworks such as Apache Hadoop, Spark, and Kafka. Collaborate with cross-functional teams to define, design, and ship data-intensive features and services. Optimize existing data processing pipelines for efficiency, scalability, and reliability. Develop … s degree in Computer Science, Information Technology, or a related field, or equivalent experience. Experience in Big Data Java development. In-depth knowledge of Big Data frameworks, such as Hadoop, Spark, and Kafka, with a strong emphasis on Java development. Proficiency in data modeling, ETL processes, and data warehousing concepts. Experience with data processing languages like Scala, Python, or …
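By way of illustration only, here is a minimal sketch of the Kafka-to-storage processing this posting describes, written in PySpark for brevity (the role itself centers on Java); the broker, topic, schema, and output paths are hypothetical, and the job assumes the spark-sql-kafka connector is supplied at submit time.

```python
# Minimal sketch (assumptions: hypothetical broker, topic, schema, and paths;
# requires the spark-sql-kafka connector package at submit time).
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StringType, LongType

spark = SparkSession.builder.appName("kafka-events").getOrCreate()

# Hypothetical event schema for the JSON payloads on the topic.
schema = StructType().add("event_id", StringType()).add("ts", LongType())

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "events")                      # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Persist the parsed stream; checkpointing makes the file output restartable.
(events.writeStream
    .format("parquet")
    .option("path", "/data/events")               # hypothetical output path
    .option("checkpointLocation", "/chk/events")  # hypothetical path
    .start()
    .awaitTermination())
```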
the latest tools and technologies to design, develop, and implement solutions that transform businesses and drive innovation. What will your job look like? 4+ years of relevant experience in Hadoop with Scala development. It is mandatory that the candidate has handled more than two projects in the above framework using Scala. Should have 4+ years of relevant experience in … handling end-to-end Big Data technology. Meeting with the development team to assess the company's big data infrastructure. Designing and coding Hadoop applications to analyze data collections. Creating data processing frameworks. Extracting data and isolating data clusters. Testing scripts and analyzing results. Troubleshooting application bugs. Maintaining the security of company data. Training staff on application use. Good … platform & data development roles. 5+ years of experience in big data technology, with experience ranging from platform architecture, data management, data architecture and application architecture. High proficiency working with the Hadoop platform, including Spark/Scala, Kafka, Spark SQL, HBase, Impala, Hive and HDFS in multi-tenant environments. Solid base in data technologies like warehousing, ETL, MDM, DQ, BI and analytical …
available now and seeking an exciting new role where they can take ownership and responsibility. The role involves designing, implementing, and managing Data and Data Analytics systems. Experience with Hadoop, Splunk, BI (Business Intelligence), NoSQL, infrastructure, architecture, and the design and implementation of previous projects is essential. We need someone who can hit the ground running. If you …
display, video, mobile, programmatic, social, native), considering viewability, interaction, and engagement metrics. Create dashboards and deliver usable insights to help steer product roadmaps. Utilize tools such as SQL, R, Hadoop, and Excel to hypothesize and perform statistical analysis, A/B tests, and experiments to measure the impact of product initiatives on revenue, technical performance, and advertiser & reader engagement. Candidates should have analysis …
reconcile, and interrogate data. Provide actionable recommendations to improve reporting processes, e.g., enhancing data quality, streamlining workflows, and optimizing query performance. Contribute to architecture and design discussions in a Hadoop-based environment. Translate high-level architecture and requirements into detailed design and code. Lead and guide complex, high-impact projects across all stages of development and implementation while ensuring … coordinating deliverables, and ensuring timely, high-quality execution. Required Skills & Experience: Proficiency in SQL, Python, and Spark. Minimum 5 years of hands-on technical data analysis experience. Familiarity with Hadoop/Big Data environments. Understanding of Data Warehouse/ETL design and development methodologies. Ability to perform under pressure and adapt to changing priorities or requirements. Strong communication skills …
Belfast, County Antrim, Northern Ireland, United Kingdom
Hays
optimize PySpark and SQL queries to analyze, reconcile, and interrogate large datasets. Recommend improvements to reporting processes, data quality, and query performance. Contribute to the architecture and design of Hadoop environments. Translate architecture and requirements into scalable, production-ready code. Provide technical leadership and direction on complex, high-impact projects. Act as a subject matter expert (SME) to senior … to Hive, Impala, and Spark ecosystem technologies (e.g. HDFS, Apache Spark, Spark SQL, UDFs, Sqoop). Experience building and optimizing Big Data pipelines, architectures, and data sets. Familiarity with Hadoop and Big Data ecosystems. Strong knowledge of Data Warehouse and ETL design and development methodologies. Ability to work under pressure and adapt to changing requirements. Excellent verbal and written …
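For illustration, a minimal PySpark sketch of the reconciliation work described above: comparing a source extract against a warehouse table. The table, key, and column names are hypothetical.

```python
# Minimal sketch (assumptions: hypothetical Hive tables, key, and columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("recon").enableHiveSupport().getOrCreate()

src = spark.table("staging.trades")    # hypothetical source extract
tgt = spark.table("warehouse.trades")  # hypothetical target table

# Keys present in the source but missing from the target.
missing = (src.select("trade_id").subtract(tgt.select("trade_id"))
              .withColumn("issue", F.lit("missing_in_target")))

# Rows whose notional disagrees between the two systems.
diffs = (src.alias("s").join(tgt.alias("t"), "trade_id")
            .where(F.col("s.notional") != F.col("t.notional"))
            .select("trade_id",
                    F.col("s.notional").alias("src_notional"),
                    F.col("t.notional").alias("tgt_notional")))

missing.show()
diffs.show()
```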
Role Title: Hadoop Engineer/ODP Platform
Location: Birmingham/Sheffield - Hybrid working with 3 days onsite per week
End Date: 28/11/2025
Role Overview: We are seeking a highly skilled Hadoop Engineer to support and enhance our Operational Data Platform (ODP) deployed in an on-premises environment. The ideal candidate will have extensive experience in the Hadoop ecosystem, strong programming skills, and a solid understanding of infrastructure-level data analytics. This role focuses on building and maintaining scalable, secure, and high-performance data pipelines within enterprise-grade on-prem systems.
Key Responsibilities: Design, develop, and maintain data pipelines using Hadoop technologies in an on-premises infrastructure. Build and optimise workflows using Apache … and troubleshoot data jobs, ensuring reliability and performance across the platform. Ensure compliance with enterprise security and data governance standards.
Required Skills & Experience: Minimum 5 years of experience in Hadoop and data engineering. Strong hands-on experience with Python, Apache Airflow, and Spark Streaming. Deep understanding of Hadoop components (HDFS, Hive, HBase, YARN) in on-prem environments. Exposure …
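As an illustration of the kind of orchestrated pipeline this role describes, a minimal Airflow DAG that submits a Spark job on a schedule; it assumes Airflow 2.x, and the DAG id, schedule, and job script path are hypothetical.

```python
# Minimal sketch (assumptions: Airflow 2.x; hypothetical DAG id, schedule,
# and spark-submit job path).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="odp_daily_ingest",      # hypothetical
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit a Spark job to the on-prem YARN cluster.
    ingest = BashOperator(
        task_id="spark_ingest",
        bash_command=(
            "spark-submit --master yarn --deploy-mode cluster "
            "/opt/jobs/ingest.py"   # hypothetical job script
        ),
    )
```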
based insights, collaborating closely with stakeholders. Passionately discover hidden solutions in large datasets to enhance business outcomes. Design, develop, and maintain data processing pipelines using Cloudera technologies, including Apache Hadoop, Apache Spark, Apache Hive, and Python. Collaborate with data engineers and scientists to translate data requirements into technical specifications. Develop and maintain frameworks for efficient data extraction, transformation, and … and verbal communication skills for effective team collaboration. Eagerness to learn and master new technologies and techniques. Experience with AutoSys is preferred. Experience with distributed data/computing tools: Hadoop, Hive, MySQL, etc. If you're a passionate Cloudera Developer eager to make a difference in the banking industry, we want to hear from you! Apply now to join …
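For illustration, a minimal sketch of the Spark-on-Hive ETL work this posting names; the database, table, and column names are hypothetical.

```python
# Minimal sketch (assumptions: hypothetical Hive database, table, and columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("daily-transactions-etl")
         .enableHiveSupport()
         .getOrCreate())

txns = spark.table("raw.transactions")  # hypothetical Hive source

# Aggregate raw transactions to a per-day, per-account summary.
daily = (txns
         .withColumn("txn_date", F.to_date("txn_ts"))
         .groupBy("txn_date", "account_id")
         .agg(F.sum("amount").alias("total_amount"),
              F.count(F.lit(1)).alias("txn_count")))

# Write back to Hive, partitioned by day for downstream query pruning.
(daily.write
      .mode("overwrite")
      .partitionBy("txn_date")
      .saveAsTable("curated.daily_account_totals"))  # hypothetical target
```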
About Agoda
Agoda is an online travel booking platform for accommodations, flights, and more. We build and deploy cutting-edge technology that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, plus flights, activities, and more.
possible. Join us and help the world’s leading organizations unlock the value of technology and build a more sustainable, more inclusive world.
YOUR ROLE
Capgemini is looking for a Hadoop Data Engineer. A Hadoop Data Engineer in the Financial Services (FS) sector is needed. This role focuses on building and maintaining scalable data systems for financial data analysis and reporting, often involving expertise in Hadoop, Spark, and related technologies.
YOUR PROFILE
Expertise in Hadoop, Spark & Scala. Experience in developing complex data transformation workflows (ETL) using Big Data technologies. Good expertise in Hive, Impala, HBase. Hands-on experience fine-tuning Spark jobs. Experience with Java and distributed computing.
ABOUT CAPGEMINI
Capgemini is a global business and technology transformation …
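By way of example, a small sketch of the Spark job fine-tuning this profile asks for, written in PySpark for brevity (the role itself emphasises Scala); the tables are hypothetical and the shuffle-partition setting would be sized to the real cluster.

```python
# Minimal sketch (assumptions: hypothetical tables; partition count would be
# sized to the actual cluster).
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (SparkSession.builder
         .appName("join-tuning")
         .config("spark.sql.shuffle.partitions", "400")
         .enableHiveSupport()
         .getOrCreate())

facts = spark.table("fs.trades")         # large fact table (hypothetical)
dims = spark.table("fs.counterparties")  # small dimension table (hypothetical)

# Broadcasting the small side avoids shuffling the large fact table.
joined = facts.join(broadcast(dims), "counterparty_id")
joined.explain()  # confirm a BroadcastHashJoin in the physical plan
```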
to technical requirements and implementation. Experience of Big Data technologies/Big Data analytics. C++, Java, Python, Shell Script, R, MATLAB, SAS Enterprise Miner. Elasticsearch and understanding of the Hadoop ecosystem. Experience working with large data sets; experience working with distributed computing tools like MapReduce, Hadoop, Hive, Pig, etc. Advanced use of Excel spreadsheets for …
Linux, GitHub, Continuous Integration, cloud technologies, virtualisation tools, monitoring utilities, disaster recovery processes/tools. Experience in troubleshooting and problem resolution. Experience in system integration. Knowledge of the following: Hadoop, Flume, Sqoop, MapReduce, Hive/Impala, HBase, Kafka, Spark Streaming. Experience of ETL tools incorporating Big Data. Shell scripting, Python. Beneficial Skills: Understanding of LAN, WAN, VPN and … SD networks. Hardware and cabling set-up experience. Experience of implementing and supporting Big Data analytics platforms built on top of Hadoop. Knowledge and appreciation of information security. If you are looking for a challenging role in an exciting environment, then please do not hesitate to apply.
London, South East, England, United Kingdom Hybrid / WFH Options
Randstad Technologies
Advert: Hadoop Engineer
6 Months Contract | Remote Working | £300 to £350 a day
A top-tier global consultancy firm is looking for an experienced Hadoop Engineer to join their team and contribute to large big data projects. The position requires a professional with a strong background in developing and managing scalable data pipelines, specifically using the Hadoop ecosystem and related tools. The role will focus on designing, building and maintaining scalable data pipelines using the big data Hadoop ecosystem and Apache Spark for large datasets. A key responsibility is to analyse infrastructure logs and operational data to derive insights, demonstrating a strong understanding of both data processing and the underlying systems. The successful candidate should have … for Scripting, Apache Spark, prior experience of building ETL pipelines, data modelling.
6 Months Contract - Remote Working - £300 to £350 a day - Inside IR35
If you are an experienced Hadoop Engineer looking for a new role then this is the perfect opportunity for you. If the above seems of interest to you then please apply directly to the AD …
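As a sketch of the infrastructure-log analysis this role centres on: parsing raw log lines with PySpark and summarising error rates per host. The log path and line format are assumptions.

```python
# Minimal sketch (assumptions: hypothetical log path and a whitespace-
# delimited "timestamp host level message" line format).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("infra-logs").getOrCreate()

logs = spark.read.text("/data/infra/logs/*.log")  # hypothetical path
pattern = r"^(\S+)\s+(\S+)\s+(\S+)\s+(.*)$"

parsed = logs.select(
    F.regexp_extract("value", pattern, 1).alias("ts"),
    F.regexp_extract("value", pattern, 2).alias("host"),
    F.regexp_extract("value", pattern, 3).alias("level"),
)

# Error rate per host, highest first.
(parsed.groupBy("host")
    .agg(F.sum(F.when(F.col("level") == "ERROR", 1).otherwise(0)).alias("errors"),
         F.count(F.lit(1)).alias("lines"))
    .withColumn("error_rate", F.col("errors") / F.col("lines"))
    .orderBy(F.desc("error_rate"))
    .show())
```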
data processing and predictive analytics.
Responsibilities: Develop and implement machine learning models using Spark ML for predictive analytics. Design and optimize training and inference pipelines for distributed systems (e.g., Hadoop). Process and analyze large-scale datasets to extract meaningful insights and features. Collaborate with data engineers to ensure seamless integration of ML workflows with data pipelines. Evaluate model … technologies.
Requirements: Proficiency in Apache Spark and Spark MLlib for machine learning tasks. Strong understanding of predictive modeling techniques (e.g., regression, classification, clustering). Experience with distributed systems like Hadoop for data storage and processing. Proficiency in Python, Scala, or Java for ML development. Familiarity with data preprocessing techniques and feature engineering. Knowledge of model evaluation metrics and techniques.
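For illustration, a minimal Spark MLlib training pipeline of the kind described; the feature table, input columns, and label are hypothetical.

```python
# Minimal sketch (assumptions: hypothetical feature table, columns, and a
# binary "churned" label).
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("churn-model").enableHiveSupport().getOrCreate()

df = spark.table("analytics.customer_features")  # hypothetical
train, test = df.randomSplit([0.8, 0.2], seed=42)

# Assemble raw columns into the single vector column Spark ML expects.
assembler = VectorAssembler(
    inputCols=["tenure_days", "monthly_spend", "support_tickets"],  # hypothetical
    outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="churned")

model = Pipeline(stages=[assembler, lr]).fit(train)
auc = (BinaryClassificationEvaluator(labelCol="churned")
       .evaluate(model.transform(test)))  # area under ROC by default
print(f"test AUC = {auc:.3f}")
```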
data processing and predictive analytics.
Role: Develop and implement machine learning models using Spark ML for predictive analytics. Design and optimise training and inference pipelines for distributed systems (e.g., Hadoop). Process and analyse large-scale datasets to extract meaningful insights and features. Collaborate with data engineers to ensure seamless integration of ML workflows with data pipelines. Evaluate model performance … computing technologies.
Experience: Proficiency in Apache Spark and Spark MLlib for machine learning tasks. Strong understanding of predictive modeling techniques (e.g., regression, classification, clustering). Experience with distributed systems like Hadoop for data storage and processing. Proficiency in Python, Scala, or Java for ML development. Familiarity with data preprocessing techniques and feature engineering. Knowledge of model evaluation metrics and techniques …
Belfast, County Antrim, Northern Ireland, United Kingdom
McGregor Boyall
work across regulatory and transformation initiatives that span multiple trading desks, functions, and stakeholders. You'll build PySpark and SQL queries to interrogate, reconcile and analyse data, contribute to Hadoop data architecture discussions, and help improve reporting processes and data quality. You'll be hands-on across technical delivery, documentation, testing, and stakeholder engagement. It's a technically rich … high-impact project work at one of the world's most complex financial institutions.
Key Skills: Strong hands-on experience with SQL, Python, Spark. Background in Big Data/Hadoop environments. Solid understanding of ETL/Data Warehousing concepts. Strong communicator, with the ability to explain technical concepts to senior stakeholders.
Details: Location: Belfast - 3 days/week onsite …
Growth Revenue Management, Marketing Analytics, CLM/CRM Analytics and/or Risk Analytics. Conduct analyses in typical analytical tools ranging from SAS, SPSS, EViews, R, Python, SQL, Teradata, Hadoop, Access, Excel, etc. Communicate analyses via compelling presentations. Solve problems, disaggregate issues, develop hypotheses and develop actionable recommendations from data and analysis. Prepare and facilitate workshops. Manage stakeholders and … An ability to think analytically, decompose problem sets, develop hypotheses and recommendations from data analysis. Strong technical skills regarding data analysis, statistics, and programming. Strong working knowledge of Python, Hadoop, SQL, and/or R. Working knowledge of Python data tools (e.g. Jupyter, Pandas, Scikit-Learn, Matplotlib). Ability to talk the language of statistics, finance, and economics a …
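As a small illustration of the Python data tools named here (pandas and scikit-learn): a quick descriptive cut plus a simple regression. The CSV extract and column names are hypothetical.

```python
# Minimal sketch (assumptions: hypothetical CSV extract and column names).
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("campaign_results.csv")  # hypothetical marketing extract

# Quick descriptive cut: revenue by acquisition channel.
print(df.groupby("channel")["revenue"].agg(["count", "mean", "sum"]))

# Simple model relating spend and impressions to revenue.
X = df[["spend", "impressions"]]
y = df["revenue"]
model = LinearRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_)))
```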
Hadoop SQL Developer
6 months | Hybrid/Northampton - 3 days a week on site | £400-460 per day - Umbrella only
Come and join a global leader in technology, consulting, and innovation, with over a century of impact in shaping how the world works. We continue to redefine what's possible with cutting-edge solutions in artificial intelligence, hybrid cloud … experienced talent, and career switchers, we empower individuals to grow, innovate, and lead in their fields. Our client is a financial services giant; they are looking for an experienced Hadoop SQL Developer to join them on an initial 6-month contract.
Key Skills: Java, Scala, Spark, Hadoop & Big Data, UI skills like Angular, reporting skills such as Tableau …
users or large data sets with 10M+ database records. This is very much a Big Data platform. Experience building REST services (orchestration layer) on CRUD data services based on the Cloudera Hadoop stack, with an emphasis on performance optimization. Understanding how to secure data in a REST architecture. Knowledge of scaling web applications, including load balancing, caching, indexing, normalization, etc. Proficiency in Java/Spring web application development. Experience with Test Driven Development and Agile methodologies; Behavior Driven Development is a plus. Knowledge of Hadoop, Big Data, Hive, Pig, NoSQL is a plus, though most engineers with this background may have limited REST experience. Additional Information: All your information will be kept confidential according to EEO guidelines. Direct Staffing Inc
/MOD or Enhanced DV Clearance. WE NEED THE PYTHON/DATA ENGINEER TO HAVE: Current DV Security Clearance (Standard or Enhanced). Experience with big data tools such as Hadoop, Cloudera or Elasticsearch. Python/PySpark experience. Experience with Palantir Foundry is nice to have. Experience working in an Agile Scrum environment with tools such as Confluence/Jira …