Alexandria, Virginia, United States Hybrid / WFH Options
Metronome LLC
help open/close the workspace during regular business hours as needed. Desired Skills: Experience with big data technologies like Hadoop, Spark, MongoDB, ElasticSearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers and Kubernetes is a plus. All candidates will be required to be on-site at a …
data structures. Encouraging self-learning among the team. Essential Skills & Qualifications: A confident engineer with authoritative knowledge of Java and Hadoop, including HDFS, Hive, and Spark. Comfortable working with large data volumes and able to demonstrate a firm understanding of logical data structures and analysis techniques. Strong skills …
in coding languages, e.g. Python, C++, etc. (Python preferred). Proficiency in database technologies, e.g. SQL, NoSQL, and big data technologies, e.g. PySpark, Hive, etc. Experience working with structured and unstructured data, e.g. text, PDFs, JPEGs, call recordings, video, etc. Knowledge of machine learning modelling techniques and how …
and building ETL pipelines - Experience with SQL - Experience mentoring team members on best practices. PREFERRED QUALIFICATIONS - Experience with big data technologies such as Hadoop, Hive, Spark, EMR - Experience operating large data warehouses. Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to …
and PL/SQL in Oracle and know-how in MS SQL Database. Experience with big data platforms/development (e.g. Hadoop, Spark, Impala, Hive). Experience in data warehousing projects (as an advantage). Good analytical troubleshooting and problem-solving skills. The ability to work independently with minimal supervision. Good communication …
/CD principles, methodologies, and tools, including GitLab CI/CD and Jenkins. Experience with distributed data and computing tools, including Spark, Databricks, Hadoop, Hive, AWS EMR, or Kafka. Experience leading a team of AI and ML engineers, researchers, and data scientists to develop and deploy advanced AI and …
Torch, TensorFlow, Caffe, Neon, NVIDIA CUDA Deep Neural Network library (cuDNN), and OpenCV) and distributed data processing frameworks (e.g. Hadoop (including HDFS, HBase, Hive, Impala, Giraph, Sqoop), Spark (including MLlib, GraphX, SQL and DataFrames)). Execute data science methods using common programming/scripting languages: Python, Java, Scala …
experience working with relational and non-relational databases (e.g. Snowflake, BigQuery, PostgreSQL, MySQL, MongoDB). Hands-on experience with big data technologies such as Apache Spark, Kafka, Hive, or Hadoop. Proficient in at least one programming language (e.g. Python, Scala, Java, R). Experience deploying and maintaining cloud …
and building ETL pipelines. Experience with SQL. Experience mentoring team members on best practices. PREFERRED QUALIFICATIONS: Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience operating large data warehouses. Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to …
and analysis as dashboards in tools such as R/Shiny, ggplot2, Tableau, Qlik, and custom web-based solutions. Familiarity with Amazon Web Services, Apache Spark, NumPy, TensorFlow, etc. is a plus. Excellent written and verbal communication skills are required. Comfortable working in a fast-paced, highly collaborative, dynamic …
Naperville, Illinois, United States Hybrid / WFH Options
esrhealthcare
with the technology stack available in the industry for data management, data ingestion, capture, processing and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc. Familiarity with networking, Windows/Linux virtual machines, containers, storage, ELB, Auto Scaling is a plus. Experience …
engineers on the team to elevate technology and consistently apply best practices. Qualifications for Software Engineer: Hands-on experience working with technologies like Hadoop, Hive, Pig, Oozie, MapReduce, Spark, Sqoop, Kafka, Flume, etc. Strong DevOps focus and experience building and deploying infrastructure with cloud deployment technologies like Ansible …
Experience in containerization technologies and container orchestration (Kubernetes, OpenShift, Docker, Mesos, etc.). Experience with different distributed technologies (e.g. Spark, S3, Snowflake, DynamoDB, CockroachDB, HDFS, Hive, etc.). Experienced with Java/Go/Python/Scala/other languages. Proficiency in English. Team Data Reply, as part of the Reply …
architecture using Spring Framework, Spring Boot, Tomcat, AWS, Docker Container or Kubernetes solutions. 5. Demonstrated experience in big data solutions (Hadoop Ecosystem, MapReduce, Pig, Hive, DataStax, etc.) in support of a screening and vetting mission.
PaaS services; developing and deploying web services; working with open-source resources in a government computing environment. Big data technologies such as Accumulo, Spark, Hive, Hadoop, ElasticSearch. Strong Linux skills and familiarity with hybrid cloud/on-prem architecture, AWS, C2S, OpenShift, etc. Can work independently in a fast …
a degree. To be considered, must have an active TS/SCI with polygraph security clearance. Preferred Qualifications: Experience with Python, Spring Boot, Hibernate, Angular, Hive, Pig, or C++. Experience in AWS data management services (Elastic MapReduce, Lambda, Kinesis). Experience with SAFe development practices. CABARESTON Original Posting: February …
MLflow and WB. Nice to have: Strong knowledge and deep experience of toolchains: Java, SQL, JavaScript, D3.js, Bash. Data processing: Hadoop, Spark, Kafka, Hive, NumPy, Pandas, Matplotlib. Mandatory certification: Microsoft Certified: Azure Data Scientist Associate. Benefits/perks listed below may vary depending on the nature of your …
work experience in lieu of a degree. Clearance Required: Must have TS/SCI with Polygraph. Preferred Qualifications: Experience with Python, Spring Boot, Hibernate, Angular, Hive, Pig, or C++. Experience in AWS data management services (Elastic MapReduce, Lambda, Kinesis). Experience with SAFe development practices. At Leidos, the opportunities are …