workloads Experience in the development of algorithms leveraging R, Python, or SQL/NoSQL Experience with distributed data/computing tools, including MapReduce, Hadoop, Hive, EMR, Spark, Gurobi, or MySQL Experience with visualization packages, including Plotly, Seaborn, or ggplot2 Bachelor's degree
Desired Qualifications: Experience with AWS Data Management services (Elastic Map Reduce, Lambda, Kinesis). Experience with SAFe development practices. Experience with Python, Spring Boot, Hibernate, Hive, Pig, or C++.
end buildout from scratch by coordinating across multiple business and technology groups o Experience building complex single-page applications using Ab Initio/Hadoop/Hive/Kafka/Oracle and modern MOM technologies o Experienced with the Linux/Unix platform o Experience with SCMs like Git; and tools like
or SaaS products and a good understanding of Digital Marketing and Marketing Technologies. Have experience working with Big Data technologies (such as Hadoop, MapReduce, Hive/Pig, Cassandra, MongoDB, etc.) An understanding of web technologies such as JavaScript, Node.js, and HTML. Some level of understanding or experience in AI
improve the relevance of ads shown to customers. We’re looking for strong Software Engineers that can build upon technologies such as Elasticsearch, Spark, Hive and Presto, as well as AWS services like Elastic Map Reduce (EMR), Redshift, Kinesis and DynamoDB to build the next generation of our analytics
proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or Oracle Experience with big data technologies such as Hadoop, Spark, or Hive Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow Proficiency in Python and at least one
Centre of Excellence. Skills, knowledge and expertise: Deep expertise in the Databricks platform, including Jobs and Workflows, Cluster Management, Catalog Design and Maintenance, Apps, Hive Metastore Management, Network Management, Delta Sharing, Dashboards, and Alerts. Proven experience working with big data technologies, i.e., Databricks and Apache Spark. Proven experience
best practices. Metadata Management Design and implement Data Catalogues and Metadata Repositories to support data discovery and lineage tracking. Use industry tools such as Apache Atlas, Hive Metastore, AWS Glue, or AWS DataZone to manage metadata assets. Data Architecture & Business Alignment Collaborate with stakeholders to translate business problems … and governance of data standards within an organisation. Experience designing or developing data catalogues and metadata repositories. Working knowledge of metadata management tools like Apache Atlas, Hive Metastore, AWS Glue, or AWS DataZone. Solid understanding of modern data architectures, with involvement in designing data lakes, data warehouses, pipelines
or Engineering - Strong experience with Python and R - A strong understanding of a number of the tools across the Hadoop ecosystem such as Spark, Hive, Impala & Pig - An expertise in at least one specific data science area such as text mining, recommender systems, pattern recognition or regression models - Previous
Data Scientist - skills in statistics, physics, mathematics, Computer Science, Engineering, Data Mining, Big Data (Hadoop, Hive, MapReduce) This is an exceptional opportunity to work as a Data Scientist within a global analytics team, utilizing various big data technologies to develop complex behavioral models, analyze customer uptake of products, and
Herndon, Virginia, United States Hybrid / WFH Options
Maxar Technologies
services working with open-source resources in a government computing environment Maintaining backend GIS technologies ICD 503 Big data technologies such as Accumulo, Spark, Hive, Hadoop, or Elasticsearch Familiarity with: hybrid cloud/on-prem architecture, AWS, C2S, and OpenStack. concepts such as Data visualization; Data management
traffic management sensors and other data sources Experience working with large data sets, experience working with distributed computing a plus (Map/Reduce, Hadoop, Hive, Apache Spark, etc.) Develop advanced data reporting capabilities and data visualizations; combining multiple disparate data sources to gain detailed insights into photo enforcement
Experience with common data science toolkits, such as Python - Proficiency in using query languages such as SQL on a big data platform e.g. Hadoop, Hive - Good applied statistics skills, such as distributions, statistical testing, regression, etc. - Good scripting and programming skills It would be desirable for the successful candidate
comfortable working with new clients Job Responsibilities: Highly experienced developing with Scala/Spark Experience of Java and multithreading Experience of Hadoop (HDFS, Hive, Impala & HBase) Experience of ETL/Data Engineering What We Offer Why work at GlobalLogic Our goal is to build an inclusive positive culture
analytic tools like R & Python; & visualization tools like Tableau & Power BI. Exposure to cloud platforms and big data systems such as Hadoop HDFS, and Hive is a plus. Ability to work with IT and Data Engineering teams to help embed analytic outputs in business processes. Graduate in Business Analytics
as well as four (4) years of experience with the MapReduce programming model, the Hadoop Distributed File System (HDFS), and technologies such as Hadoop, Hive, Pig, etc. Shall have demonstrated work experience with serialization, such as JSON and/or BSON. Shall have demonstrated work experience with developing RESTful
mission critical data pipelines and ETL systems. 5+ years of hands-on experience with big data technology, systems and tools such as AWS, Hadoop, Hive, and Snowflake Expertise with common Software Engineering languages such as Python, Scala, Java, SQL and a proven ability to learn new programming languages Experience … visualization skills to convey information and results clearly Experience with DevOps tools such as Docker, Kubernetes, Jenkins, etc. Experience with event messaging frameworks like Apache Kafka The hiring range for this position in Santa Monica, California is $136,038 to $182,490 per year, in Glendale, California is
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
Citigroup Inc
analytical and quantitative skills; Data driven and results-oriented Experience with Core Java required (Spark a plus) Experience with SQL Experience working with Hadoop, Hive, Sqoop and other technologies in Cloudera's CDP distribution. Understanding of version control (git) Experience working as part of an agile team. Excellent written … and oral communication skills Technical Skills: Strong knowledge in Java Some knowledge in Hadoop, Hive, SQL, Spark Understanding of Unix Shell Scripting CI/CD Pipeline Maven or Gradle experience Predictive analytics (desirable) PySpark (desirable) Trade Surveillance domain knowledge (desirable) Education: Bachelor’s/University degree or equivalent experience What
Required Qualifications SQL (BigQuery and PostgreSQL) proficiency and Python programming skills Experience with Google Cloud Platform Experience with big data warehouse systems (Google BigQuery, Apache Hive, etc) Hands on experience working with machine learning teams; understanding of the core concepts of model evaluation techniques and metrics, and suitable
City of London, London, United Kingdom Hybrid / WFH Options
Captur
Required Qualifications SQL (BigQuery and PostgreSQL) proficiency and Python programming skills Experience with Google Cloud Platform Experience with big data warehouse systems (Google BigQuery, Apache Hive, etc) Hands on experience working with machine learning teams; understanding of the core concepts of model evaluation techniques and metrics, and suitable
scalable Big Data Store (NoSQL) such as HBase, CloudBase/Accumulo, Bigtable, etc. o MapReduce programming model and technologies such as Hadoop, Hive, Pig, etc. o Hadoop Distributed File System (HDFS) o Serialization such as JSON and/or BSON • 4 years of SWE experience may be
techniques (logit, GLM, time series, decision trees, random forests, clustering), statistical programming languages (SAS, R, Python, MATLAB), and big data tools and platforms (Hadoop, Hive, etc.). Solid academic record. Strong computer skills. Knowledge of other languages is desirable. Proactive attitude, maturity, responsibility, and strong work ethic. Quick learner.