Bachelor's degree
Nice If You Have:
- Experience in the development of algorithms leveraging R, Python, or SQL/NoSQL
- Experience with distributed data and computing tools, including MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL
- Experience with visualization packages, including Plotly, Seaborn, or ggplot2 (see the sketch after this posting)
- Experience effectively managing teams and trusted client relationships
- Experience developing visually compelling PowerPoint presentations
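As a minimal sketch of the Python visualization work these requirements name, here is a Seaborn example on synthetic data; the column names, values, and output file are invented for illustration:

```python
# Minimal Seaborn sketch on synthetic data (all names here are hypothetical).
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=42)
df = pd.DataFrame({
    "latency_ms": rng.normal(loc=120, scale=25, size=500),
    "region": rng.choice(["us-east", "us-west", "eu-central"], size=500),
})

# Distribution of a metric per category: the bread-and-butter plot style
# that Seaborn/ggplot2-type layered-graphics requirements refer to.
sns.boxplot(data=df, x="region", y="latency_ms")
plt.title("Request latency by region (synthetic data)")
plt.tight_layout()
plt.savefig("latency_by_region.png")
```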
Hanover, Maryland, United States Hybrid / WFH Options
Enlighten, an HII - Mission Technologies Company
Will have some ability to work from home from time to time. Flexibility is essential to accommodate any changes in the schedule.
Preferred Requirements:
- Experience with big data technologies such as Hadoop, Spark, PostgreSQL, ElasticSearch, Hive, Drill, Impala, Trino, Presto, etc.
- Experience with containers; EKS, Diode, CI/CD, and Terraform are a plus
- Work could possibly require some on-call …
database technologies (PostgreSQL, MySQL, RDS). US citizenship and an active TS/SCI with Full Scope Polygraph security clearance required.
Desired Experience:
- Experience with distributed databases and streaming tools (Hadoop, Spark, YARN, Hive, Trino)
- Experience with Remote Desktop Protocol (RDP) technologies
- Experience with data access control, specifically Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC); a minimal sketch of the distinction follows this posting
- Familiarity …
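Since RBAC and ABAC recur across these clearance-level roles, here is a hypothetical Python sketch of the difference: RBAC grants access by role membership, while ABAC evaluates attributes of user and resource. Every role name, attribute, and rule below is illustrative, not any product's API:

```python
# Hypothetical RBAC vs. ABAC check (illustrative only, not a product API).
from dataclasses import dataclass, field

ROLE_PERMISSIONS = {
    "analyst": {"dataset:read"},
    "engineer": {"dataset:read", "dataset:write"},
}

@dataclass
class User:
    name: str
    roles: set = field(default_factory=set)
    attributes: dict = field(default_factory=dict)  # e.g., {"clearance": "TS/SCI"}

def rbac_allows(user: User, permission: str) -> bool:
    # RBAC: permission is granted if any of the user's roles carries it.
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in user.roles)

def abac_allows(user: User, resource_attrs: dict) -> bool:
    # ABAC: compare user attributes against resource attributes.
    # One illustrative rule: clearance must match the data's classification label.
    return user.attributes.get("clearance") == resource_attrs.get("classification")

alice = User("alice", roles={"analyst"}, attributes={"clearance": "TS/SCI"})
print(rbac_allows(alice, "dataset:write"))               # False
print(abac_allows(alice, {"classification": "TS/SCI"}))  # True
```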
data science methods using parallel computing frameworks (e.g., Deeplearning4j, Torch, TensorFlow, Caffe, Neon, the NVIDIA CUDA Deep Neural Network library (cuDNN), and OpenCV) and distributed data processing frameworks (e.g., Hadoop (including HDFS, HBase, Hive, Impala, Giraph, Sqoop) and Spark (including MLlib, GraphX, SQL, and DataFrames)); a minimal Spark MLlib sketch follows this posting. Execute data science methods using common programming/scripting languages: Python, Java, Scala, R …
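As a concrete, hypothetical instance of the Spark MLlib work named above, a minimal PySpark pipeline run in local mode; the data and column names are invented:

```python
# Minimal PySpark MLlib sketch (local mode; data and columns are invented).
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.master("local[*]").appName("mllib-sketch").getOrCreate()

df = spark.createDataFrame(
    [(0.0, 1.2, 0.7), (1.0, 3.4, 2.1), (0.0, 0.8, 0.3), (1.0, 2.9, 1.8)],
    ["label", "f1", "f2"],
)

# Assemble raw columns into the single vector column MLlib estimators expect.
features = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
model = LogisticRegression(maxIter=10).fit(features.transform(df))
model.transform(features.transform(df)).select("label", "prediction").show()

spark.stop()
```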
systems engineering lifecycle.
- Strong communication skills to translate stakeholder requirements into system use cases
- Experience with visualization tools (e.g., Tableau, D3, ggplot)
- Experience utilizing multiple big data technologies: Hadoop, Hive, HDFS, HBase, MapReduce, Spark, Kafka, Sqoop
- Experience with SQL, Spark, and ETL
- Experience extracting, cleaning, and transforming large transactional datasets to build predictive models and generate supporting documentation (a minimal cleaning/transform sketch follows this posting)
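A minimal sketch of that extract-clean-transform step in PySpark; the input path, schema, and business rules are assumptions for illustration:

```python
# Hypothetical PySpark ETL sketch: extract, clean, and aggregate transactions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.master("local[*]").appName("etl-sketch").getOrCreate()

# Extract: the path and column names are placeholders.
tx = spark.read.csv("transactions.csv", header=True, inferSchema=True)

# Clean: drop rows missing key fields, remove non-positive amounts,
# and deduplicate on the transaction id.
clean = (
    tx.dropna(subset=["tx_id", "customer_id", "amount"])
      .filter(F.col("amount") > 0)
      .dropDuplicates(["tx_id"])
)

# Transform: per-customer features of the kind fed into predictive models.
features = clean.groupBy("customer_id").agg(
    F.count("*").alias("tx_count"),
    F.sum("amount").alias("total_spend"),
    F.avg("amount").alias("avg_spend"),
)
features.show()
spark.stop()
```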
Excellent oral and written communication skills. Understanding of Agile software development methodologies and use of standard software development tool suites.
Desired Technical Skills:
- Experience with big data technologies like Hadoop, Accumulo, Ceph, Spark, NiFi, Kafka, PostgreSQL, ElasticSearch, Hive, Drill, Impala, Trino, Presto, etc.
- Experience with containers, EKS, Diode, CI/CD, and Terraform are a plus
The Benefits Package …
hands-on experience in programming and software development using Java, JavaScript, or Python. Demonstrated hands-on experience working with PostgreSQL and Apache Cassandra (a minimal sketch of both follows this posting). Demonstrated hands-on experience working with Hadoop, Apache Spark, and their related ecosystems.
Salary Range: $175,000-$200,000
Equal Opportunity Employer/Individuals with Disabilities/Protected Veterans
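A hedged sketch of the PostgreSQL and Cassandra pairing this posting asks for, using the widely used psycopg2 and cassandra-driver packages; the hosts, credentials, keyspace, and table names are all placeholders:

```python
# Hypothetical sketch: a similar read against PostgreSQL and Apache Cassandra.
# Connection details, keyspace, and table names are placeholders.
import psycopg2
from cassandra.cluster import Cluster

# PostgreSQL: standard DB-API cursor usage via psycopg2.
pg = psycopg2.connect(host="localhost", dbname="appdb", user="app", password="secret")
with pg.cursor() as cur:
    cur.execute("SELECT id, name FROM users WHERE active = %s", (True,))
    for row in cur.fetchall():
        print(row)
pg.close()

# Cassandra: session-based access via the DataStax driver; CQL reads like SQL,
# but queries must follow the table's partition-key design.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect("app_keyspace")
for row in session.execute("SELECT id, name FROM users_by_id WHERE id = %s", (42,)):
    print(row.id, row.name)
cluster.shutdown()
```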
or Tableau
• Experience supporting the development of AI/ML algorithms, such as natural language processing, in a production environment
• Experience configuring and utilizing data management tools, such as Hadoop, MapReduce, or similar (a MapReduce-style word-count sketch follows this posting)
• Ability to translate complex technical findings into an easily understood summary in graphical, verbal, or written form
• Must have an active TS/SCI with Favorable …
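For the MapReduce item above, the classic minimal word-count pair written for Hadoop Streaming, which lets plain Python scripts act as mapper and reducer over stdin/stdout; this is a sketch, not a production job:

```python
# mapper.py -- Hadoop Streaming mapper: emit "word<TAB>1" per token.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
# reducer.py -- Hadoop Streaming reducer: sum counts for each word.
# Hadoop delivers mapper output sorted by key, so equal words arrive adjacent.
import sys

current, total = None, 0
for line in sys.stdin:
    word, count = line.rsplit("\t", 1)
    if word != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = word, 0
    total += int(count)
if current is not None:
    print(f"{current}\t{total}")
```

These run under the Hadoop Streaming jar with `-mapper "python3 mapper.py" -reducer "python3 reducer.py"`; the input and output paths would be job-specific.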
right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up.
Skills Desired: Apache Hadoop, Applied Mathematics, Big Data, Curiosity, Data Governance, Data Literacy, Data Management, Data Quality, Data Science, Data Strategy, Data Visualization, Deep Learning, Machine Learning (ML), Machine Learning Algorithms, Master Data …
What You'll Bring
• 6 to 10 years' IT Architecture experience working in a software development, technical project management, digital delivery, or technology consulting environment
• Platform implementation experience (Apache Hadoop, Kafka, Storm and Spark, Elasticsearch, and others)
• Experience around data integration & migration, data governance, data mining, data visualisation, and database modelling in an agile delivery-based environment
• Experience with at …
Chantilly, Virginia, United States Hybrid / WFH Options
The DarkStar Group
Python (Pandas, NumPy, SciPy, scikit-learn, standard libraries, etc.), Python packages that wrap Machine Learning (packages for NLP, Object Detection, etc.), Linux, AWS/C2S, Apache NiFi, Spark, PySpark, Hadoop, Kafka, ElasticSearch, Solr, Kibana, Neo4j, MariaDB, Postgres, Docker, Puppet, and many others. Work on this program takes place in Chantilly, VA, McLean, VA, and in various field offices throughout …
Services, Telecom and Media, Retail and CPG, and Public Services. Consolidated revenues as of 12 months ending December 2024 totaled $13.8 billion.
Experience: Minimum 10+ years
- Strong knowledge of Hadoop, Kafka, and SQL/NoSQL
- Specialization in designing and implementing large-scale data pipelines, ETL processes, and distributed systems (a minimal Kafka sketch follows this posting)
- Should be able to work independently with minimal help/guidance …
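As a minimal illustration of the Kafka side of such pipelines, a sketch using the common kafka-python package; the broker address and topic name are placeholders:

```python
# Hypothetical kafka-python sketch: produce and consume JSON events.
# Broker address and topic are placeholders.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 1, "amount": 42.50})
producer.flush()  # block until the broker acknowledges buffered records

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.offset, message.value)
    break  # sketch only: read a single record and stop
```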
London, South East, England, United Kingdom Hybrid / WFH Options
Lorien
Provide Environment Management representation in daily scrums, working groups, and ad-hoc meetings.
Required Skillsets:
- Strong skills and experience with data technologies such as IBM DB2, Oracle, MongoDB, Hive, Hadoop, SQL, Informatica, and similar tech stacks
- Attention to detail and strong ability to work independently and navigate complex target end state architecture (Tessa)
- Strong knowledge and experience with …
4+ years of relevant experience in lieu of degree)
- One active certification: CCISO, CISM, CISSP, GSLC, SSCP, or GSEC
- Expertise in designing, implementing, and managing Big Data solutions using Hadoop, Spark, and data streaming technologies
- Proven experience optimizing data pipelines, performing large-scale data processing, and ensuring data quality (a minimal data-quality gate sketch follows this posting)
- Strong knowledge of data warehousing concepts, ETL processes, and distributed …
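One hedged sketch of the "ensuring data quality" piece: profiling per-column null rates in Spark and failing fast before downstream stages run. The input path, DataFrame, and threshold are hypothetical:

```python
# Hypothetical data-quality gate: fail fast if any column's null rate
# exceeds a threshold before the pipeline's downstream stages run.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.master("local[*]").appName("dq-sketch").getOrCreate()
df = spark.read.parquet("staging/events")  # placeholder input path

total = df.count()
null_counts = df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
).first().asDict()

MAX_NULL_RATE = 0.05  # illustrative threshold
for column, nulls in null_counts.items():
    rate = nulls / total if total else 0.0
    print(f"{column}: {rate:.1%} null")
    assert rate <= MAX_NULL_RATE, f"data quality gate failed on {column}"
spark.stop()
```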
in scripting languages (Python, Bash) and programming languages (Java). Hands-on experience with DevOps tools: GitLab, Ansible, Prometheus, Grafana, Nagios, Argo CD, Rancher, Harbor. Deep understanding of big data technologies: Hadoop, Spark, and NoSQL databases.
Nice to Have: Familiarity with agile methodologies (Scrum or Kanban). Strong problem-solving skills and a collaborative working style. Excellent communication skills, with the …
PyTorch, scikit-learn).
- Experience with cloud-based AI platforms (e.g., AWS SageMaker, Google Cloud AI Platform, Azure Machine Learning)
- Experience with data management and processing tools (e.g., Hadoop, Spark, SQL)
- Proficiency in programming languages such as Python and Java (a small scikit-learn sketch follows this posting)
- Experience with DevOps practices and tools (e.g., CI/CD, containerization)
Desired Qualifications: Experience with machine …
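A small, hypothetical example of the scikit-learn side of that stack: a standard preprocessing-plus-model pipeline evaluated on a held-out split, with synthetic data:

```python
# Minimal scikit-learn sketch: scaling + classifier in one Pipeline,
# evaluated with a train/test split. Data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = Pipeline([
    ("scale", StandardScaler()),      # normalize features
    ("model", LogisticRegression()),  # simple, strong baseline
])
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```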
REST, XML, UML)
- Databases: Oracle, Postgres, SQL, PL/SQL
- CI/CD pipelines with Docker, Git, Jenkins, Kubernetes, Kafka, ZooKeeper, Consul
- Exposure to big data technologies (e.g., Elasticsearch, the Hadoop ecosystem, Spark, Hive, Kafka)
- Familiarity with containerization/configuration tools (Docker, Chef)
- BS in Software Engineering, Computer Science, or related field
indexing, search platforms, GPU workloads, and distributed storage (e.g., Cloudera).
- Experience developing algorithms with R, Python, SQL, or NoSQL
- Knowledge of distributed data and computing tools such as Hadoop, Hive, Spark, MapReduce, or EMR
- Hands-on experience with visualization tools like Plotly, Seaborn, or ggplot2
- Security+ certification