…workloads. Experience in the development of algorithms leveraging R, Python, or SQL/NoSQL. Experience with distributed data/computing tools, including MapReduce, Hadoop, Hive, EMR, Spark, Gurobi, or MySQL. Experience with visualization packages, including Plotly, Seaborn, or ggplot2. Bachelor's degree.
Desired Qualifications: Experience with AWS data management services (Elastic MapReduce, Lambda, Kinesis). Experience with SAFe development practices. Experience with Python, Spring Boot, Hibernate, Hive, Pig, or C++.
…end buildout from scratch by coordinating across multiple business and technology groups
- Experience building complex single-page applications using Ab Initio/Hadoop/Hive/Kafka/Oracle and modern MOM technologies
- Experienced with the Linux/Unix platform
- Experience with SCMs like Git, and tools like …
…or SaaS products, and a good understanding of digital marketing and marketing technologies. Experience working with big data technologies (such as Hadoop, MapReduce, Hive/Pig, Cassandra, MongoDB, etc.). An understanding of web technologies such as JavaScript, Node.js, and HTML. Some level of understanding or experience in AI …
…proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or Oracle. Experience with big data technologies such as Hadoop, Spark, or Hive. Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow. Proficiency in Python and at least one …
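Several of these postings pair warehousing with an orchestrator such as Apache Airflow. As a minimal sketch of what that means in practice (assuming Airflow 2.4+; the DAG id, task names, and target table are hypothetical):

```python
# Minimal Airflow DAG sketch: one extract step feeding one load step.
# dag_id, task names, and the target table are hypothetical examples.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**_):
    # Placeholder: pull yesterday's orders from a source system.
    print("extracting orders")


def load_orders(**_):
    # Placeholder: write the extracted rows into the warehouse table.
    print("loading into analytics.orders_daily")


with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_orders", python_callable=load_orders)
    extract >> load
```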
…Centre of Excellence. Skills, knowledge and expertise: Deep expertise in the Databricks platform, including Jobs and Workflows, Cluster Management, Catalog Design and Maintenance, Apps, Hive Metastore Management, Network Management, Delta Sharing, Dashboards, and Alerts. Proven experience working with big data technologies, i.e., Databricks and Apache Spark. Proven experience …
Hybrid
Job Summary: We are seeking a seasoned Principal Java Big Data Engineer with 4 to 30 years of deep expertise in Java, Azure, Apache Spark, Kafka, Azure DevOps, Hadoop, Angular, and PostgreSQL. This role involves architecting enterprise-grade data platforms, guiding technical strategy, mentoring engineering teams, and delivering … scalable solutions in a cloud-native environment.
Key Responsibilities: Strategic Leadership: Define and drive the technical vision for large-scale data platforms, leveraging Java, Apache Spark, Kafka, and Azure technologies. Enterprise Architecture: Architect and implement robust Big Data pipelines using Hadoop, Spark, and Kafka for real-time and batch … leading monitoring, troubleshooting, and incident resolution efforts.
Technical Skills: Expert-level proficiency in Java (Spring Boot, Hibernate, or similar frameworks). Advanced expertise in Apache Spark and Kafka for distributed processing and real-time streaming. Comprehensive knowledge of the Hadoop ecosystem and Big Data frameworks. Mastery of Azure services for …
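The real-time pipeline work described above typically means reading a Kafka topic with Spark Structured Streaming. A minimal sketch, assuming a reachable broker and a hypothetical `events` topic (the posting is Java-centric; PySpark is used here purely for brevity):

```python
# Minimal Spark Structured Streaming sketch: Kafka topic -> parquet sink.
# Broker address, topic, schema, and paths are hypothetical examples.
# Requires the spark-sql-kafka connector package on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
    # Kafka delivers bytes; decode the value column, then parse the JSON payload
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/events")
    .option("checkpointLocation", "/data/checkpoints/events")
    .start()
)
```

The checkpoint location is what lets the stream restart without reprocessing or losing records; the same skeleton handles batch by swapping `readStream`/`writeStream` for `read`/`write`.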
…or Engineering
- Strong experience with Python and R
- A strong understanding of a number of tools across the Hadoop ecosystem, such as Spark, Hive, Impala, and Pig
- Expertise in at least one specific data science area, such as text mining, recommender systems, pattern recognition, or regression models
- Previous …
Data Scientist - skills in statistics, physics, mathematics, computer science, engineering, data mining, and big data (Hadoop, Hive, MapReduce). This is an exceptional opportunity to work as a Data Scientist within a global analytics team, utilizing various big data technologies to develop complex behavioral models, analyze customer uptake of products, and …
…development and deployment. Experience with open-source resources in government environments. Familiarity with GIS technologies, ICD 503, and big data tools like Hadoop, Spark, Hive, and Elasticsearch. Knowledge of hybrid cloud/on-prem architectures, AWS, C2S, and OpenStack. Certifications like Security+ or similar. Experience with military or intelligence systems is …
…methods using parallel computing frameworks (e.g., Deeplearning4j, Torch, TensorFlow, Caffe, Neon, NVIDIA cuDNN, OpenCV) and distributed data processing frameworks (e.g., Hadoop, including HDFS, HBase, Hive, Impala, Giraph, Sqoop; Spark, including MLlib, GraphX, SQL, and DataFrames). Proficient in programming/scripting languages such as Python, Java, Scala, and R (statistics …
…testing, and operations experience. Bachelor's degree in computer science or equivalent. 2+ years of experience with big data technologies such as AWS, Hadoop, Spark, Pig, Hive, Lucene/Solr, or Storm/Samza. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have …
…data science, machine learning algorithms, natural language processing, computer vision.
- Experience designing and implementing information retrieval and web mining systems. Experience with MapReduce, Spark, Hive, and Scala.
- Knowledge of Linux/Unix and scripting in Perl/Ruby/Python.
Amazon is committed to a diverse and inclusive workplace.
Herndon, Virginia, United States Hybrid / WFH Options
Maxar Technologies
…services working with open-source resources in a government computing environment. Maintaining backend GIS technologies. ICD 503. Big data technologies such as Accumulo, Spark, Hive, Hadoop, or Elasticsearch. Familiarity with: hybrid cloud/on-prem architecture, AWS, C2S, and OpenStack; concepts such as data visualization and data management …
London, England, United Kingdom Hybrid / WFH Options
Australian Investors Association Limited
Lead Apache Hadoop Engineer. Job ID: R0335074. Full/Part-Time: Full-time. Regular/Temporary: Regular. Listed: 2025-03-19. Location: London. Corporate Title: Vice President.
Position Overview: Technology serves as the foundation of our entire organization. Our Technology, Data, and Innovation (TDI) strategy … programme + 2 days volunteering leave per year.
Your Key Responsibilities: Develop robust architectures and designs for big data platforms and applications within the Apache Hadoop ecosystem. Implement and deploy big data platforms and solutions on-premises and in hybrid cloud environments. Read, understand, and modify open-source code …
…actively contribute throughout the Agile development lifecycle, participating in planning, refinement, and review ceremonies.
Key Responsibilities: Develop and maintain ETL pipelines in Databricks, leveraging Apache Spark and Delta Lake. Design, implement, and optimize data transformations and treatments for structured and unstructured data. Work with Hive Metastore and … technical impact assessments and rationales. Work within GitLab repository structures and adhere to project-specific processes.
Required Skills and Experience: Strong expertise in Databricks, Apache Spark, and Delta Lake. Experience with Hive Metastore and Unity Catalog for data governance. Proficiency in Python, SQL, Scala, or other relevant …
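A minimal sketch of the kind of Databricks ETL step this posting describes, assuming Delta Lake is available (it is built in on Databricks); the source path, column names, and target table are hypothetical:

```python
# Minimal Databricks-style ETL sketch: raw JSON -> cleaned, metastore-registered Delta table.
# The landing path, columns, and table name are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read raw landing-zone files (hypothetical mount point)
raw = spark.read.json("/mnt/raw/orders/")

# Typical treatments: deduplicate, derive typed columns, drop bad rows
cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_date", to_date(col("order_ts")))
    .filter(col("amount") > 0)
)

# Write as a Delta table so it is visible via the metastore / Unity Catalog
(cleaned.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("analytics.orders_clean"))
```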
…Experience with common data science toolkits, such as Python
- Proficiency in query languages such as SQL on a big data platform, e.g. Hadoop or Hive
- Good applied statistics skills, such as distributions, statistical testing, regression, etc.
- Good scripting and programming skills
It would be desirable for the successful candidate …
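For context on "SQL on a big data platform": a query against a Hive-registered table can be run from PySpark as below; the database, table, and columns are hypothetical examples.

```python
# Querying a Hive metastore table with Spark SQL.
from pyspark.sql import SparkSession

# enableHiveSupport() lets Spark read tables registered in the Hive metastore
spark = (
    SparkSession.builder
    .appName("hive-query")
    .enableHiveSupport()
    .getOrCreate()
)

# Hypothetical table: order counts per region since the start of 2024
result = spark.sql("""
    SELECT region, COUNT(*) AS orders
    FROM sales.orders
    WHERE order_date >= '2024-01-01'
    GROUP BY region
""")
result.show()
```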
London, England, United Kingdom Hybrid / WFH Options
PURVIEW
…to work methodically with a high level of attention to detail. Experience working with SQL or any big data technologies is a plus (Hadoop, Hive, HBase, Scala, Spark, etc.). Good team player with a strong team ethos.
…analytic tools like R and Python, and visualization tools like Tableau and Power BI. Exposure to cloud platforms and big data systems such as Hadoop HDFS and Hive is a plus. Ability to work with IT and Data Engineering teams to help embed analytic outputs in business processes. Graduate in Business Analytics …
…comfortable working with new clients.
Job Responsibilities:
- Highly experienced developing with Scala/Spark
- Experience with Java and multithreading
- Experience with Hadoop (HDFS, Hive, Impala, and HBase)
- Experience with ETL/data engineering
What We Offer: Why work at GlobalLogic? Our goal is to build an inclusive, positive culture …
…mission-critical data pipelines and ETL systems. 5+ years of hands-on experience with big data technology, systems, and tools such as AWS, Hadoop, Hive, and Snowflake. Expertise with common software engineering languages such as Python, Scala, Java, and SQL, and a proven ability to learn new programming languages. Experience … visualization skills to convey information and results clearly. Experience with DevOps tools such as Docker, Kubernetes, Jenkins, etc. Experience with event messaging frameworks like Apache Kafka. The hiring range for this position in Santa Monica, California is $136,038 to $182,490 per year; in Glendale, California it is …
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
Citigroup Inc
…analytical and quantitative skills; data-driven and results-oriented
- Experience with Core Java required (Spark a plus)
- Experience with SQL
- Experience working with Hadoop, Hive, Sqoop, and other technologies in Cloudera's CDP distribution
- Understanding of version control (Git)
- Experience working as part of an agile team
- Excellent written … and oral communication skills
Technical Skills: Strong knowledge of Java. Some knowledge of Hadoop, Hive, SQL, and Spark. Understanding of Unix shell scripting. CI/CD pipelines. Maven or Gradle experience. Predictive analytics (desirable). PySpark (desirable). Trade surveillance domain knowledge (desirable).
Education: Bachelor's/University degree or equivalent experience.
Required Qualifications:
- SQL (BigQuery and PostgreSQL) proficiency and Python programming skills
- Experience with Google Cloud Platform
- Experience with big data warehouse systems (Google BigQuery, Apache Hive, etc.)
- Hands-on experience working with machine learning teams; understanding of the core concepts of model evaluation techniques and metrics, and suitable …
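As a hedged illustration of the "model evaluation techniques and metrics" this posting refers to, a minimal scikit-learn example on synthetic data:

```python
# Core binary-classification evaluation metrics on a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real labeled dataset
X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)            # hard class labels
proba = model.predict_proba(X_te)[:, 1]  # scores, needed for ROC AUC

print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("recall   :", recall_score(y_te, pred))
print("roc auc  :", roc_auc_score(y_te, proba))
```

Held-out evaluation like this is the baseline; in practice teams also use cross-validation and threshold-dependent views such as precision-recall curves.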