MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as Hadoop, Hive, Spark, EMR - Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, Datastage, etc. Amazon is an equal opportunity employer and does not discriminate More ❯
Alexandria, Virginia, United States Hybrid / WFH Options
Metronome LLC
help open/close the workspace during regular business hours as needed Desired Skills Experience with big data technologies like Hadoop, Spark, MongoDB, ElasticSearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers and Kubernetes is a plus All candidates will be required to be on-site at a More ❯
Fort Belvoir, Virginia, United States Hybrid / WFH Options
Metronome LLC
help open/close the workspace during regular business hours as needed Desired Skills Experience with big data technologies like Hadoop, Spark, MongoDB, ElasticSearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers and Kubernetes is a plus All candidates will be required to be on-site at a More ❯
languages e.g. Python, R, Scala, etc. (Python preferred) Proficiency in database technologies e.g. SQL, ETL, NoSQL, DW, and big data technologies e.g. PySpark, Hive, etc. Experience working with structured and unstructured data e.g. text, PDFs, JPGs, call recordings, video, etc. Knowledge of machine learning modelling techniques and More ❯
Random Forest, graph models, Bayesian inference, NLP, Computer Vision, neural networks (Required) 4-7 years of experience with big data architecture and pipelines, Hadoop, Hive, Spark, Kafka, etc. (Required) 4-7 years of experience in data visualization (Required) 2-4 years of management consulting, investment banking, or corporate strategy (Required More ❯
London, England, United Kingdom Hybrid / WFH Options
Qh4 Consulting
C#) and willingness to interact with them where necessary. Exposure to Cloudera Data Platform or similar big data environments. Experience with tools such as Apache Hive, NiFi, Airflow, Azure Blob Storage, and RabbitMQ. Background in investment management or broader financial services, or a strong willingness to learn the More ❯
expertise in Cloudera Data Platform (CDP), Cloudera Manager, and Cloudera Navigator . Strong knowledge of Hadoop ecosystem and related technologies such as HDFS, YARN, Hive, Impala, Spark, and Kafka . Strong AWS services/Architecture experience with hands-on expertise in cloud-based deployments (AWS, Azure, or GCP) . More ❯
rapid prototyping and disciplined software development processes. Experience with Python, ML libraries (e.g. spaCy, NumPy, SciPy, Transformers, etc.), data tools and technologies (Spark, Hadoop, Hive, Redshift, SQL), and toolkits for ML and deep learning (SparkML, TensorFlow, Keras). Demonstrated ability to work on multi-disciplinary teams with diverse skillsets. More ❯
citizenship and an active TS/SCI with Polygraph security clearance required Desired Experience: Experience with distributed databases and streaming tools (Hadoop, Spark, Yarn, Hive, Trino) Experience with Remote Desktop Protocol (RDP) technologies Experience with data access control, specifically Role-Based Access Control (RBAC) and Attribute-Based Access Control More ❯
have 4+ years of relevant work experience in Analytics, Business Intelligence, or Technical Operations Mastery of SQL, Python, and ETL using big data tools (Hive/Presto, Redshift) Previous experience with web frameworks for Python such as Django/Flask is a plus Experience writing data pipelines using Airflow More ❯
sales opportunities at client engagements An understanding of database technologies e.g. SQL, ETL, NoSQL, DW, and big data technologies e.g. Hadoop, Mahout, Pig, Hive, etc. An understanding of statistical modelling techniques e.g. classification and regression techniques, neural networks, Markov chains, etc. An understanding of cloud technologies e.g. AWS More ❯
engineers on the team to elevate technology and consistently apply best practices. Qualifications for Software Engineer Hands-on experience working with technologies like Hadoop, Hive, Pig, Oozie, MapReduce, Spark, Sqoop, Kafka, Flume, etc. Strong DevOps focus and experience building and deploying infrastructure with cloud deployment technologies like Ansible More ❯
structures and algorithms Experience working with at least one machine learning framework (TensorFlow, PyTorch, XGBoost, etc.) Experience working with big data technologies (Spark, Kafka, Hive, Databricks, feature stores, etc.) Experience working with containerisation, deployment and orchestration technologies (Docker, Kubernetes, Airflow, CI/CD pipelines, etc.) Experience with automated More ❯
London, England, United Kingdom Hybrid / WFH Options
ZipRecruiter
and training techniques. Experience deploying models in production environments. Nice to Have: Experience in GenAI/LLMs Familiarity with distributed computing tools (Hadoop, Hive, Spark). Background in banking, risk management, or capital markets. Why Join? This is a unique opportunity to work at the forefront of More ❯
MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as Hadoop, Hive, Spark, EMR - Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, Datastage, etc. Our inclusive culture empowers Amazonians to deliver the best results More ❯
architecture using Spring Framework, Spring Boot, Tomcat, AWS, Docker Container or Kubernetes solutions. 5. Demonstrated experience in big data solutions (Hadoop Ecosystem, MapReduce, Pig, Hive, DataStax, etc.) in support of a screening and vetting mission. More ❯
required Desired Qualifications: Familiarity with AWS CDK, Terraform, Packer Design Concepts: REST APIs Programming Languages: JavaScript/NodeJS Processing Tools: Presto/Trino, MapReduce, Hive The Swift Group and Subsidiaries are an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to More ❯
Strong understanding of systems and infrastructure concepts. Nice to have experience with: MLOps collaboration and robust deployment pipelines. Distributed systems like Cloudera, Hive, Spark, or Hadoop. Document indexing/search platforms and GPU-based workloads. Data tools such as R, SQL/NoSQL, EMR, and More ❯
City of London, England, United Kingdom Hybrid / WFH Options
McGregor Boyall
and training techniques. Experience deploying models in production environments. Nice to Have: Experience in GenAI/LLMs Familiarity with distributed computing tools (Hadoop, Hive, Spark). Background in banking, risk management, or capital markets. Why Join? This is a unique opportunity to work at the forefront of More ❯
level of competence in SQL, Python, Spark/Scala, and Unix/Linux scripts Real-world experience using Hadoop and the related query engines (Hive/Impala) for big data processing Ability to construct model features utilizing open-banking data, in-house data, and/or third-party data More ❯