on experience with building data pipelines in a programming language like Python. Hands-on experience with building and maintaining Tableau dashboards and/or Jupyter reports. Working understanding of Hadoop and big data analytics. Ability to understand the needs of and collaborate with stakeholders from analytics and business teams. Education: Bachelor's or Master's degree in Computer Science, Engineering, Management
leadership experience. Proficiency in modern data platforms (Databricks, Snowflake, Kafka), container orchestration (Kubernetes/OpenShift), and multi-cloud deployments across AWS, Azure, and GCP. Advanced knowledge of Big Data ecosystems (Hadoop/Hive/Spark), data lakehouse architectures, mesh topologies, and real-time streaming platforms. Strong Unix/Linux skills, database connectivity (JDBC/ODBC), authentication systems (LDAP, Active Directory
communication skills, including the ability to explain technical findings to non-technical stakeholders. Preferred Qualifications Master's or PhD in a quantitative discipline. Experience with big data tools (e.g., Hadoop, Spark) and cloud platforms. Familiarity with complex ETL pipelines. Experience with data visualization tools (e.g., Tableau, Power BI, matplotlib, seaborn). Proven ability to leverage GenAI via prompt engineering.
in the schedule. Must be willing/able to help open/close the workspace during regular business hours as needed. Preferred Requirements Experience with big data technologies like: Hadoop, Accumulo, Ceph, Spark, NiFi, Kafka, PostgreSQL, ElasticSearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers, EKS, Diode, CI/CD, and Terraform is a plus. Compensation At IAMUS
in large ERP implementations Education Bachelor's degree in Computer Science, Engineering, or a related field. Advanced degree preferred. Technical Skills: Data engineering tools and technologies (e.g., SQL, Python, Hadoop, Spark). Cloud platforms (e.g., AWS, Azure, Google Cloud). Data modeling, ETL processes, EDI, and data warehousing. Tableau and Power BI. Additional Information Supplemental Information This is a safety
days/week. Must be willing/able to help open/close the workspace during regular business hours as needed. Preferred Requirements Experience with big data technologies like: Hadoop, Accumulo, Ceph, Spark, NiFi, Kafka, PostgreSQL, ElasticSearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers, EKS, Diode, CI/CD, and Terraform is a plus. We have many
able to work on-site 4-5 days/week. Flexibility is key to accommodate any schedule changes per the customer. Preferred Requirements Experience with big data technologies like: Hadoop, Accumulo, Ceph, Spark, NiFi, Kafka, PostgreSQL, ElasticSearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers, EKS, Diode, CI/CD, and Terraform is a plus. Work could possibly
and entity resolution. Preferred Qualifications: Experience with visualization tools and techniques (e.g., Periscope, Business Objects, D3, ggplot, Tableau, SAS Visual Analytics, Power BI). Experience with big data technologies (e.g., Hadoop, Hive, HDFS, HBase, MapReduce, Spark, Kafka, Sqoop). Master's degree in mathematics, statistics, computer science/engineering, or other related technical fields with equivalent practical experience. Experience constructing
as Spring, Hibernate, or JPA. Experience with relational databases and SQL. Knowledge of version control systems, preferably Git. Experience with ETL processes and big data technologies (e.g., Spark, Hadoop, Kafka) for building data pipelines. Experience with data quality tools. Knowledge of cloud platforms such as AWS, Google Cloud, or Azure. Preferred Qualifications: Experience with data anomaly detection. Past
San Antonio, Texas, United States Hybrid / WFH Options
Wyetech, LLC
skills. Understanding of Agile software development methodologies and use of standard software development tool suites. Desired Technical Skills Security+ certification is highly desired. Experience with big data technologies like: Hadoop, Accumulo, Ceph, Spark, NiFi, Kafka, PostgreSQL, ElasticSearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers, EKS, Diode, CI/CD, and Terraform is a plus. Work could possibly
in automation tools and frameworks (e.g. Ansible, Terraform, Kubernetes, Docker) for automating system deployment and maintenance. Familiarity with modern data architectures and technologies, including big data platforms (e.g. Kafka, Hadoop, Spark), distributed storage (e.g. Cassandra, HDFS, AWS S3), etc. Extensive experience in database management (e.g. NoSQL databases, MySQL, PostgreSQL). Programming Skills: Proficient in at least one programming
for the future of eBay's data platform infrastructure. We are seeking Data Platform Software Engineers, not Data Engineers. While familiarity with Spark, Flink, and other tools in the Hadoop environment is a definite advantage, your focus will be on building the data platform rather than just creating data pipelines. If you are an engineer eager to gain an
Falls Church, Virginia, United States Hybrid / WFH Options
Rackner
Rancher) with CI/CD Apply DevSecOps + security-first practices from design to delivery Tech You'll Touch AWS Python FastAPI Node.js React Terraform Apache Airflow Trino Spark Hadoop Kubernetes You Have Active Secret Clearance 3+ years in Agile, cloud-based data engineering Experience with API design, ORM + SQL, AWS data services Bonus: AI/ML, big
hands-on experience in programming and software development using Java, JavaScript, or Python. Demonstrated hands-on experience working with PostgreSQL and Apache NiFi. Demonstrated hands-on experience working with Hadoop, Apache Spark, and their related ecosystems. A candidate must be a US Citizen and requires an active/current TS/SCI with Polygraph clearance. Salary Range
Telegraf and others. Knowledge and demonstrable hands-on experience with middleware technologies (Kafka, API gateways, and others) and data engineering tools/frameworks like Apache Spark, Airflow, Flink, and Hadoop ecosystems. Understanding of network technology fundamentals, data structures, and scalable system design, and the ability to translate information in a structured manner for wider product and engineering teams to translate into
Chantilly, Virginia, United States Hybrid / WFH Options
The DarkStar Group
Python (pandas, NumPy, SciPy, scikit-learn, standard libraries, etc.), Python packages that wrap Machine Learning (packages for NLP, Object Detection, etc.), Linux, AWS/C2S, Apache NiFi, Spark, PySpark, Hadoop, Kafka, ElasticSearch, Solr, Kibana, Neo4j, MariaDB, Postgres, Docker, Puppet, and many others. Work on this program takes place in Chantilly, VA, McLean, VA, and in various field offices throughout