Elasticsearch and understanding of the Hadoop ecosystem Experience working with large data sets, experience working with distributed computing tools like MapReduce, Hadoop, Hive, Pig, etc. Advanced use of Excel spreadsheets for analytical purposes An MSc or PhD in Data Science or an analytical subject (Physics, Mathematics …
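Several listings here name MapReduce as a required skill; the pattern itself is simple to show. Below is a minimal in-memory sketch of the map and reduce phases in plain Python — no Hadoop involved, and the function names are illustrative, not from any framework:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce: sum the counts for each distinct key (word)."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data tools", "big data systems"]
result = reduce_phase(map_phase(docs))
print(result)  # {'big': 2, 'data': 2, 'tools': 1, 'systems': 1}
```

In a real Hadoop or Spark job the same two phases run distributed across many machines, with a shuffle step grouping pairs by key between them.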
end buildout from scratch by coordinating across multiple business and technology groups o Experience building complex single-page applications using Ab Initio/Hadoop/Hive/Kafka/Oracle and modern MOM technologies o Experienced with the Linux/Unix platform o Experience with SCMs like Git and tools like …
or SaaS products and a good understanding of Digital Marketing and Marketing Technologies. Have experience working with Big Data technologies (such as Hadoop, MapReduce, Hive/Pig, Cassandra, MongoDB, etc.) An understanding of web technologies such as JavaScript, Node.js and HTML. Some level of understanding or experience in AI …
proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or Oracle Experience with big data technologies such as Hadoop, Spark, or Hive Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow Proficiency in Python and at least one …
Centre of Excellence. Skills, knowledge and expertise: Deep expertise in the Databricks platform, including Jobs and Workflows, Cluster Management, Catalog Design and Maintenance, Apps, Hive Metastore Management, Network Management, Delta Sharing, Dashboards, and Alerts. Proven experience working with big data technologies, i.e., Databricks and Apache Spark. Proven experience …
or Engineering - Strong experience with Python and R - A strong understanding of a number of the tools across the Hadoop ecosystem such as Spark, Hive, Impala & Pig - Expertise in at least one specific data science area such as text mining, recommender systems, pattern recognition or regression models - Previous …
Data Scientist - skills in statistics, physics, mathematics, Computer Science, Engineering, Data Mining, Big Data (Hadoop, Hive, MapReduce). This is an exceptional opportunity to work as a Data Scientist within a global analytics team, utilizing various big data technologies to develop complex behavioral models, analyze customer uptake of products, and …
development and deployment. Experience with open-source resources in government environments. Familiarity with GIS technologies, ICD 503, and big data tools like Hadoop, Spark, Hive, Elasticsearch. Knowledge of hybrid cloud/on-prem architectures, AWS, C2S, OpenStack. Certifications like Security+ or similar. Experience with military or intelligence systems is …
methods using parallel computing frameworks (e.g., Deeplearning4j, Torch, TensorFlow, Caffe, Neon, NVIDIA cuDNN, OpenCV) and distributed data processing frameworks (e.g., Hadoop including HDFS, HBase, Hive, Impala, Giraph, Sqoop; Spark including MLlib, GraphX, SQL, DataFrames). Proficient in programming/scripting languages such as Python, Java, Scala, and R (statistics …
testing, and operations experience Bachelor's degree in computer science or equivalent 2+ years of experience with big data technologies such as AWS, Hadoop, Spark, Pig, Hive, Lucene/Solr, or Storm/Samza Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have …
data science, machine learning algorithms, natural language processing, computer vision. - Experience designing and implementing information retrieval and web mining systems. - Experience with MapReduce, Spark, Hive and Scala. - Knowledge of Linux/Unix and scripting in Perl/Ruby/Python. Amazon is committed to a diverse and inclusive workplace. …
actively contribute throughout the Agile development lifecycle, participating in planning, refinement, and review ceremonies. Key Responsibilities: Develop and maintain ETL pipelines in Databricks, leveraging Apache Spark and Delta Lake. Design, implement, and optimize data transformations and treatments for structured and unstructured data. Work with Hive Metastore and … technical impact assessments and rationales. Work within GitLab repository structures and adhere to project-specific processes. Required Skills and Experience: Strong expertise in Databricks, Apache Spark, and Delta Lake. Experience with Hive Metastore and Unity Catalog for data governance. Proficiency in Python, SQL, Scala, or other relevant …
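The ETL responsibility described above follows a fixed extract-transform-load shape. A framework-agnostic sketch of that shape is below, in plain Python — in Databricks the same steps would be Spark DataFrame operations writing to a Delta table; here an in-memory CSV and SQLite stand in, and the table and column names are hypothetical:

```python
import csv
import io
import sqlite3

# Extract: read raw rows (an in-memory CSV stands in for a landing zone).
raw = io.StringIO("id,amount\n1, 10.5 \n2,\n3,7.0\n")
rows = list(csv.DictReader(raw))

# Transform: drop rows with missing amounts, trim whitespace, cast types.
clean = [
    {"id": int(r["id"]), "amount": float(r["amount"].strip())}
    for r in rows
    if r["amount"].strip()
]

# Load: write to a table (SQLite stands in for a Delta table here).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id INTEGER, amount REAL)")
con.executemany("INSERT INTO sales VALUES (:id, :amount)", clean)
total = con.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)  # 17.5
```

The treatment of bad rows (here, silently dropped) is the kind of transformation decision such a pipeline has to document and justify.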
Experience with common data science toolkits, such as Python - Proficiency in using query languages such as SQL on a big data platform e.g. Hadoop, Hive - Good applied statistics skills, such as distributions, statistical testing, regression, etc. - Good scripting and programming skills It would be desirable for the successful candidate …
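The applied-statistics skill named above — regression — reduces to a closed-form calculation in the simple linear case. A minimal ordinary-least-squares fit in plain Python, with made-up data for illustration:

```python
def ols_fit(xs, ys):
    """Fit y = a + b*x by ordinary least squares (closed form)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx  # intercept passes through the mean point
    return a, b

xs = [1, 2, 3, 4]
ys = [2.1, 3.9, 6.0, 8.1]  # roughly y = 2x
a, b = ols_fit(xs, ys)
print(round(b, 2))  # slope close to 2
```

On a big data platform the same fit would typically be delegated to a library (e.g., Spark MLlib's regression models) rather than computed by hand.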
London, England, United Kingdom Hybrid / WFH Options
PURVIEW
to work methodically with a high level of attention to detail Experience working with SQL or any big data technologies is a plus (Hadoop, Hive, HBase, Scala, Spark, etc.) Good team player with a strong team ethos. …
comfortable working with new clients Job Responsibilities: Highly experienced in developing with Scala/Spark Experience of Java and multithreading Experience of Hadoop (HDFS, Hive, Impala & HBase) Experience of ETL/Data Engineering What We Offer Why work at GlobalLogic? Our goal is to build an inclusive, positive culture …
analytics tools like R & Python, and visualization tools like Tableau & Power BI. Exposure to cloud platforms and big data systems such as Hadoop HDFS and Hive is a plus. Ability to work with IT and Data Engineering teams to help embed analytic outputs in business processes. Graduate in Business Analytics …
mission-critical data pipelines and ETL systems. 5+ years of hands-on experience with big data technology, systems and tools such as AWS, Hadoop, Hive, and Snowflake. Expertise with common software engineering languages such as Python, Scala, Java, SQL and a proven ability to learn new programming languages. Experience … visualization skills to convey information and results clearly. Experience with DevOps tools such as Docker, Kubernetes, Jenkins, etc. Experience with event messaging frameworks like Apache Kafka. The hiring range for this position in Santa Monica, California is $136,038 to $182,490 per year, in Glendale, California is …
London, England, United Kingdom Hybrid / WFH Options
Deutsche Bank
Join to apply for the Lead Apache Hadoop Engineer role at Deutsche Bank. Corporate Title: Vice President. Technology serves as the foundation of our entire organization. … closely with clients while being part of a larger, creative, and innovative team dedicated to making a significant impact. Position Overview Job Title: Lead Apache Hadoop Engineer Location: London. Our Technology, Data, and Innovation (TDI) strategy … programme + 2 days' volunteering leave per year Your Key Responsibilities Develop robust architectures and designs for big data platforms and applications within the Apache Hadoop ecosystem Implement and deploy big data platforms and solutions on-premises and in hybrid cloud environments. Read, understand, and modify open-source code …
Required Qualifications SQL (BigQuery and PostgreSQL) proficiency and Python programming skills Experience with Google Cloud Platform Experience with big data warehouse systems (Google BigQuery, Apache Hive, etc.) Hands-on experience working with machine learning teams; understanding of the core concepts of model evaluation techniques and metrics, and suitable …
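The model-evaluation metrics this listing refers to are concrete formulas. A minimal sketch computing precision and recall for binary classification, in plain Python with toy labels:

```python
def precision_recall(y_true, y_pred):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN) for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 0, 1, 1, 0, 1]  # ground-truth labels (toy data)
y_pred = [1, 1, 1, 0, 0, 1]  # model predictions
p, r = precision_recall(y_true, y_pred)
print(p, r)  # 0.75 0.75
```

Precision penalizes false positives and recall penalizes false negatives, so which metric to optimize depends on the cost of each error type in the product.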