data applications using the latest open-source technologies. Experience working in an offshore, managed-outcome delivery model is desired. Develop logical and physical data models for big data platforms. Automate workflows using Apache Airflow. Create data pipelines using Apache Hive, Apache Spark, and Apache Kafka. Provide ongoing maintenance and enhancements to existing systems and participate in rotational on-call … experience: 10+ years of hands-on experience developing data warehouse solutions and data products. 6+ years of hands-on experience developing a distributed data processing platform with Hadoop, Hive or Spark, and Airflow or another workflow orchestration solution are required. 5+ years of hands-on experience in modelling and designing schemas for data lakes or RDBMS platforms. Experience …
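A workflow like the Hive → Spark → Kafka pipeline described above is expressed in Airflow as a DAG of task dependencies. As a library-free sketch of the idea an Airflow DAG encodes (the task names here are hypothetical, and this uses the standard library rather than the Airflow API):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical pipeline tasks mapped to their upstream dependencies,
# mirroring what Airflow's >> operator declares between operators.
deps = {
    "extract_hive": set(),                      # read source tables
    "transform_spark": {"extract_hive"},        # Spark job depends on extract
    "publish_kafka": {"transform_spark"},       # publish results downstream
    "data_quality_check": {"transform_spark"},  # can run alongside publish
}

def run_order(dependencies):
    """Return one valid execution order for the task graph."""
    return list(TopologicalSorter(dependencies).static_order())

if __name__ == "__main__":
    print(run_order(deps))
```

An orchestrator such as Airflow adds scheduling, retries, and parallelism on top of exactly this dependency ordering.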
Northampton, Northamptonshire, East Midlands, United Kingdom
Experis
Conduct>It, Express>It, Metadata Hub, and PDL. Hands-on experience with SQL, Unix/Linux shell scripting, and data warehouse concepts. Familiarity with big data ecosystems (Hadoop, Hive, Spark) and cloud platforms (AWS, Azure, GCP) is a plus. Proven ability to troubleshoot complex ETL jobs and resolve performance issues. Experience working with large-scale datasets and enterprise …
do in person with client) Duration: 12 Months Required Skills: Programming Languages: Strong proficiency in Python, Java, and SQL. Big Data Frameworks: Deep understanding of the Hadoop ecosystem (HDFS, MapReduce, Hive, Spark). Cloud Data Warehousing: Expertise in Snowflake architecture, data manipulation, and query optimization. Data Engineering Concepts: Knowledge of data ingestion, transformation, data quality checks, and data security practices. Data …
London, South East, England, United Kingdom Hybrid / WFH Options
Lorien
capacity. Provide Environment Management representation in daily scrums, working groups, and ad-hoc meetings. Required Skillsets: Strong skills and experience with data technologies such as IBM DB2, Oracle, MongoDB, Hive, Hadoop, SQL, Informatica, and similar tech stacks. Attention to detail and a strong ability to work independently and navigate complex target end-state architecture (Tessa). Strong knowledge and experience …
Title: GCP Data Engineer Location: Philadelphia, PA (candidates willing to relocate may be submitted) GCP Data Engineer - GCP Dataflow and Apache Beam (key skills) Primary Skills: PySpark, Spark, Python, Big Data, GCP, Apache Beam, Dataflow, Airflow, Kafka, and BigQuery; GFO, Google Analytics; JavaScript is a must. Strong experience with Dataflow and BigQuery. The candidate should have experience leading a team … cloud platforms (preferably GCP) and their Big Data technologies. Hands-on experience with real-time stream processing as well as high-volume batch processing, and skilled in advanced SQL, GCP BigQuery, Apache Kafka, data lakes, etc. Hands-on experience with Big Data technologies - Hadoop, Hive, and Spark - and an enterprise-scale Customer Data Platform (CDP). Experience in at least one …
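The Dataflow/Beam programming model named in this role reduces to element-wise transforms (ParDo) followed by group-by-key aggregation. A minimal stdlib sketch of that model, using hypothetical click-event data rather than the Beam API itself:

```python
from collections import defaultdict

# Hypothetical click events, standing in for a Kafka topic or BigQuery table.
events = [
    {"user": "a", "clicks": 2},
    {"user": "b", "clicks": 1},
    {"user": "a", "clicks": 3},
]

def map_phase(records):
    """ParDo-style transform: emit (key, value) pairs per element."""
    for rec in records:
        yield rec["user"], rec["clicks"]

def group_and_sum(pairs):
    """GroupByKey + Combine: aggregate values per key."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

result = group_and_sum(map_phase(events))
```

In Beam the same shape would be a `PCollection` piped through a `ParDo` and a combining transform, with Dataflow handling distribution, windowing, and autoscaling.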
UI development such as React, Node.js Experience in other languages such as Python Experience in Machine Learning, Information Retrieval, and Recommendation Systems, as well as Big Data (Hadoop/Spark/Hive) Experience in using ML software and libraries (R/Python) Experience in building a live e-commerce product that has scaled to a large number of users #LI-Hybrid …
converting research studies into tangible real-world changes Knowledge of AWS platforms such as S3, Glue, Athena, SageMaker Experience with big data technologies such as AWS, Hadoop, Spark, Pig, Hive, etc. PhD in Industrial/Organizational Psychology or related field Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to our success. …
Oak Brook, Illinois, United States Hybrid / WFH Options
Ace Hardware Corporation
distribution), including performing backup and restore operations and supporting development, test, and production systems. Key Responsibilities Cloudera Hadoop Administration Manage and support Cloudera Hadoop clusters and services (HDFS, YARN, Hive, Impala, Spark, Oozie, etc.). Perform cluster upgrades, patching, performance tuning, capacity planning, and health monitoring. Secure the Hadoop platform using Kerberos, Ranger, or Sentry. Develop and maintain automation … of Spark and Delta Lake architecture. Experience with IAM, Active Directory, and SSO integration. Familiarity with DevOps and CI/CD for data platforms. Deep understanding of the Hadoop ecosystem: Hive, Impala, Spark, HDFS, YARN. Experience integrating data from DB2 to Hadoop/Databricks using tools like Sqoop or custom connectors. Scripting skills in Shell and/or Python for …
also have a systems integration background or experience Experience of developing the Finance Data Strategy for large financial institutions, developing future-state architecture Delivery experience in Big Data technologies and Apache ecosystem technologies such as Spark, Kafka, and Hive, and experience building end-to-end data pipelines using on-premise or cloud-based data platforms. Hands-on experience …
City of London, London, United Kingdom Hybrid / WFH Options
Expedia Group
topics like Regression, Naïve Bayes, Deep Learning, Gradient Boosting, Random Forests, SVMs, Neural Networks Helpful to have experience with programming, statistical, and querying languages like Python, R, SQL/Hive, Java Helpful to have an understanding of distributed file systems, scalable datastores, distributed computing and related technologies (Spark, Hadoop, etc.); implementation experience of MapReduce techniques, in-memory data processing, etc. … computing context Helpful to be able to effectively communicate and engage with a variety of partners (e.g., internal, external, technical, non-technical people) Helpful to have Java, R, C++, Hive, Hadoop, Microsoft SQL Server knowledge What We Offer: Successful candidates will receive a competitive compensation package including the benefits below and others: 50,000 GBP pro-rata Hybrid Work …
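Of the ML topics this role lists, Naïve Bayes is compact enough to sketch end to end. A minimal multinomial Naïve Bayes with Laplace smoothing, on toy hypothetical documents (in practice one would use a library such as scikit-learn):

```python
from collections import Counter, defaultdict
from math import log

# Toy labeled documents (hypothetical data) for a spam/ham classifier.
train = [
    ("cheap flights deal", "spam"),
    ("meeting agenda attached", "ham"),
    ("cheap deal now", "spam"),
    ("project meeting notes", "ham"),
]

def fit(docs):
    """Count class frequencies and per-class word frequencies."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in docs:
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def predict(text, class_counts, word_counts, vocab):
    """Pick the class maximizing log prior + sum of log likelihoods."""
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label, c in class_counts.items():
        lp = log(c / total)  # log prior P(class)
        n = sum(word_counts[label].values())
        for w in text.split():
            # Laplace smoothing avoids zero probability for unseen words.
            lp += log((word_counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = fit(train)
```

Working in log space, as here, is the standard trick to avoid floating-point underflow when multiplying many small word probabilities.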
Role: Senior AI/ML Scientist – Personalization & GenAI (Dubai based) Join a high-performing Data Science team whose mission is to drive competitive value through scalable AI solutions. The team builds models that enhance user experiences, enable better decision-making …