and vetting mission. 4. Demonstrated experience in microservice architecture using Spring Framework, Spring Boot, Tomcat, AWS, and Docker container or Kubernetes solutions. 5. Demonstrated experience in big data solutions (Hadoop ecosystem, MapReduce, Pig, Hive, DataStax, etc.) in support of a screening and vetting mission.
environment) Clojure or similar languages (e.g. Java or Python) Software collaboration and revision control (e.g. Git or SVN) Desired skills and experience: Elasticsearch/Kibana Cloud computing (e.g. AWS) Hadoop/Spark etc. Graph databases Educational level: Master's degree Tagged as: Clustering, Data Mining, Industry, Information Retrieval, Master's Degree, Sentiment Analysis, United Kingdom
and continuous delivery Excellent problem-solving skills and a collaborative mindset Agile development experience in a team setting Bonus skills (nice to have): Experience with big data tools like Hadoop, Spark, or Scala Exposure to fraud, payments, or financial services platforms Understanding of cloud-native development and container orchestration Knowledge of test-driven development and modern code quality practices
and permissions - Experience with non-relational databases/data stores (object storage, document or key-value stores, graph databases, column-family databases) - Experience with big data technologies such as: Hadoop, Hive, Spark, EMR Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during …
search, GPU workloads, and distributed storage, including Cloudera • Experience in the development of algorithms leveraging R, Python, SQL, or NoSQL • Experience with distributed data or computing tools, including MapReduce, Hadoop, Hive, EMR, Spark, Gurobi, or MySQL • Experience with visualization packages, including Plotly, Seaborn, or ggplot2 About Blue Sky Blue Sky Innovative Solutions (Blue Sky) assists its federal, state and …
Data Science, or Physics. • Technical Skills: Proficiency with core technical concepts, including data structures, storage systems, cloud infrastructure, and front-end frameworks. Also, familiarity with technologies like Oracle, PostgreSQL, Hadoop, Spark, AWS, or Azure. • Programming Proficiency: Expertise in programming languages such as Java, C++, Python, JavaScript, or similar. • User-Centered Approach: Understanding of how technical decisions directly impact user …
e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as: Hadoop, Hive, Spark, EMR - Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, Datastage, etc. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If …
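Several of the listings above ask for HiveQL or SparkSQL experience alongside a scripting language. Purely as an illustration of that kind of work, here is a minimal PySpark sketch that registers a DataFrame as a temporary view and queries it with SparkSQL; the table name, columns, and data are hypothetical, not taken from any of the roles above.

```python
# Minimal SparkSQL sketch (illustrative; names and data are hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparksql-sketch").getOrCreate()

# A small in-memory DataFrame standing in for a Hive/EMR warehouse table.
orders = spark.createDataFrame(
    [("alice", 120.0), ("bob", 75.5), ("alice", 30.0)],
    ["customer", "amount"],
)
orders.createOrReplaceTempView("orders")

# The same aggregation could be expressed in HiveQL against a real table.
spark.sql(
    "SELECT customer, SUM(amount) AS total FROM orders GROUP BY customer"
).show()

spark.stop()
```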
years of relevant work experience in solving real-world business problems using machine learning, deep learning, data mining and statistical algorithms - Strong hands-on programming skills in Python, SQL, Hadoop/Hive; additional knowledge of Spark, Scala, R, Java desired but not mandatory - Strong analytical thinking - Ability to creatively solve business problems, innovating new approaches where required and articulating …
Extensive knowledge of and experience with large-scale database technology (e.g. Oracle, Teradata, Netezza, Greenplum, etc.) Experience with non-relational platforms and tools for large-scale data processing (e.g. Hadoop, HBase) Familiarity and experience with common data integration and data transformation tools (e.g. Informatica, DataStage, Talend, Matillion) Familiarity and experience with common BI and data exploration tools (e.g. MicroStrategy …
BASIC QUALIFICATIONS - 3+ years of experience in cloud architecture and implementation - Bachelor's degree in Computer Science, Engineering, related field, or equivalent experience - Experience with databases (e.g., SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) - Experience in consulting, design and implementation of serverless distributed solutions - Experience in software development with an object-oriented language PREFERRED QUALIFICATIONS - AWS experience preferred, with proficiency in …
such as Python, Java, or Scala, and experience with ML frameworks like TensorFlow, PyTorch, or scikit-learn Experience with large-scale distributed systems and big data technologies (e.g., Spark, Hadoop, Kafka) Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach …
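The listing above names scikit-learn among acceptable ML frameworks. As a minimal sketch of that kind of workflow, the example below trains and evaluates a classifier on a bundled toy dataset; the model choice and parameters are arbitrary assumptions, not requirements of the role.

```python
# Minimal scikit-learn sketch (illustrative; model and parameters are arbitrary).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a bundled toy dataset and hold out a quarter of it for evaluation.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("holdout accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```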
QUALIFICATIONS - 10+ years of IT platform implementation experience in a technical and analytical role. - Experience in consulting, design and implementation of serverless distributed solutions. - Experienced in databases (SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) and managing complex, large-scale customer-facing projects. - Experienced as a technical specialist in design and architecture, with expertise in cloud-based solutions (AWS or equivalent …
process Ability to transform business processes using BPA, RPA, or other technology-enabled automation Skilled in creating and managing databases with the use of relevant software such as MySQL, Hadoop, or MongoDB Skilled in discovering patterns in large data sets with the use of relevant software such as Oracle Data Mining or Informatica Adept at managing project plans, resources …
Reach beyond with Liberty IT; for this is where you'll find the super challenges, where you'll be given the scope and the support to go further, dig deeper and fly higher. We won't stand over …
individual must demonstrate the ability to apply logical thinking and creative problem-solving to develop, enhance, or troubleshoot software. Required Skills: • Extensive experience with Big Data • Experience with Apache Hadoop or AWS • Experience with relational databases • Expertise in Java full-stack development • Knowledge of telemetry systems and processing • Experience with systems integration • Experience with anomaly investigation and resolution • Willingness …
need for data governance and metadata management skills. Ability to communicate effectively with diverse stakeholders, both technical and non-technical, across different seniority levels. Desired Skills & Certifications: Experience with Hadoop technologies and integrating them into ETL (Extract, Transform, Load) data pipelines. Familiarity with Apache Tika for metadata and text extraction from various document types. Experience as a Data Layer …
heterogeneous data sources. • Good knowledge of warehousing and ETLs. Extensive knowledge of popular database providers such as SQL Server, PostgreSQL, Teradata and others. • Proficiency in technologies in the Apache Hadoop ecosystem, especially Hive, Impala and Ranger • Experience working with open file and table formats such as Parquet, AVRO, ORC, Iceberg and Delta Lake • Extensive knowledge of automation and software development …
scalable Big Data store (NoSQL) such as HBase, CloudBase/Accumulo, BigTable, etc.; Shall have demonstrated work experience with the MapReduce programming model and technologies such as Hadoop, Hive, Pig, etc.; Shall have demonstrated work experience with the Hadoop Distributed File System (HDFS); Shall have demonstrated work experience with serialization such as JSON and/or …
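The requirement above centers on the MapReduce programming model. As a minimal, purely illustrative sketch of that model, the two scripts below implement the classic word count as a Hadoop Streaming mapper and reducer; the script names and the sample invocation are assumptions, and the streaming jar path varies by installation.

```python
#!/usr/bin/env python3
# mapper.py - Hadoop Streaming mapper: emit a (word, 1) pair per word on stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py - Hadoop Streaming reducer: sum the counts for each word.
# Streaming delivers keys to the reducer in sorted order, so a running
# tally per key is sufficient.
import sys

current, total = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = word, 0
    total += int(count)
if current is not None:
    print(f"{current}\t{total}")
```

Such a job would typically be launched against HDFS with something like `hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -input /data/in -output /data/out`, where the jar name and HDFS paths here are placeholders.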
Linux, GitHub, Continuous Integration, Cloud technologies, Virtualisation Tools, Monitoring utilities, Disaster recovery process/tools Experience in troubleshooting and problem resolution Experience in System Integration Knowledge of the following: Hadoop, Flume, Sqoop, MapReduce, Hive/Impala, HBase, Kafka, Spark Streaming Experience of ETL tools incorporating Big Data Shell Scripting, Python Beneficial Skills: Understanding of: LAN, WAN, VPN and SD Networks Hardware and Cabling set-up experience Experience of implementing and supporting Big Data analytics platforms built on top of Hadoop Knowledge and appreciation of Information security If you are looking for a challenging role in an exciting environment, then please do not hesitate to apply
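Since this listing pairs Kafka with Spark Streaming, here is a minimal, illustrative Spark Structured Streaming sketch that reads a Kafka topic and echoes messages to the console; the broker address and topic name are placeholders, and running it assumes the Spark-Kafka integration package is on the classpath.

```python
# Minimal Spark Structured Streaming sketch reading from Kafka.
# Broker and topic are placeholders; requires the spark-sql-kafka package.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "events")                        # placeholder topic
    .load()
)

# Kafka values arrive as bytes; cast to strings before displaying.
query = (
    events.selectExpr("CAST(value AS STRING) AS value")
    .writeStream.format("console")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```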
to technical requirements and implementation. Experience of Big Data technologies/Big Data analytics. C++, Java, Python, Shell Script, R, MATLAB, SAS Enterprise Miner Elasticsearch and understanding of the Hadoop ecosystem Experience working with large data sets, experience working with distributed computing tools like MapReduce, Hadoop, Hive, Pig, etc. Advanced use of Excel spreadsheets for …