Chantilly, Virginia, United States Hybrid / WFH Options
Noblis
domain Experience with monitoring, logging, and alerting tools (e.g., Prometheus, Grafana). Experience querying databases (SQL, Hive). Experience working with data platforms like Hadoop and Spark. Overview Noblis and our wholly owned subsidiaries, Noblis ESI, and Noblis MSD tackle the nation's toughest problems and apply advanced solutions …
MongoDB, Cassandra). • In-depth knowledge of data warehousing concepts and tools (e.g., Redshift, Snowflake, Google BigQuery). • Experience with big data platforms (e.g., Hadoop, Spark, Kafka). • Familiarity with cloud-based data platforms and services (e.g., AWS, Azure, Google Cloud). • Expertise in ETL tools and processes (e.g., …
Experience with ETL processes and tools. Knowledge of cloud platforms (e.g., GCP, AWS, Azure) and their data services. Familiarity with big data technologies (e.g., Hadoop, Spark) is a plus. Understanding of AI tools like Gemini and ChatGPT is also a plus. Excellent problem-solving and communication skills. Ability to …
tools like QGIS, ArcGIS. Experience with geospatial databases (e.g., PostGIS, GeoServer) and cloud platforms (e.g., AWS, Azure). Familiarity with big data technologies (e.g., Hadoop, Spark) is an advantage. Strong understanding of spatial data structures, algorithms, and analysis techniques. Problem-Solving: Strong analytical and problem-solving skills with a …
data architecture, data modelling, ETL/ELT processes and data pipeline development. Competency with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Hadoop, Spark, Kafka, etc.). Excellent communication and leadership skills, with the ability to engage and influence stakeholders at all levels. Insightful problem-solving skills …
Thornton, Yorkshire and the Humber, United Kingdom
Victrex Manufacturing Ltd
Thornton-Cleveleys, Lancashire, North West, United Kingdom
Victrex Manufacturing Ltd
implementing cloud-based data solutions using AWS services such as EC2, S3, EKS, Lambda, API Gateway, Glue and big data tools like Spark, EMR, Hadoop, etc. Hands-on experience in data profiling, data modeling and data engineering using relational databases like Snowflake, Oracle, SQL Server; ETL tools like Informatica …
schemas for efficient querying. Implementing ETL/ELT pipelines to load and transform data in Snowflake. Big Data Processing Frameworks: Familiarity with Apache Spark, Hadoop, or other distributed data processing frameworks. Data Governance and Compliance: Understanding of data governance principles, security policies, and compliance standards (e.g., GDPR, HIPAA).
Chantilly, Virginia, United States Hybrid / WFH Options
Aerospace Corporation
and guiding teams toward software development best practices. Experience in SQL, NoSQL, Cypher and other big data querying languages. Experience with big data frameworks (Hadoop, Spark, Flink, etc.). Experience with ML lifecycle management tools (MLflow, Kubeflow, etc.). Familiarity with data pipelining and streaming technologies (Apache Kafka, Apache NiFi, etc.) …
Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, Scikit-learn). Familiarity with data processing tools and platforms (e.g., SQL, Apache Spark, Hadoop). Knowledge of cloud computing services (e.g., AWS, Google Cloud, Azure) and containerization technologies (e.g., Docker, Kubernetes) is a plus. Hugging Face Ecosystem: Demonstrated …
experience in data engineering, including working with AWS services. Proficiency in AWS services like S3, Glue, Redshift, Lambda, and EMR. Knowledge of Cloudera-based Hadoop is a plus. Strong ETL development skills and experience with data integration tools. Knowledge of data modeling, data warehousing, and data transformation techniques. Familiarity …
assurance measures. • Proven experience with cloud-based data platforms (Azure and Databricks). • Deep understanding of data modeling, data warehousing, and big data technologies (e.g., Hadoop, Spark). • Proficiency in programming languages such as Python, Java, or Scala. • Exceptional problem-solving abilities with keen attention to detail and data accuracy.
deployments, and experience in deploying and managing ML services on these platforms. Knowledge of distributed computing frameworks (e.g., Spark) and big data technologies (e.g., Hadoop, Kafka). Proficiency in Python, Shell, Ruby, Golang, or C++ and experience with infrastructure-as-code tools (e.g., Terraform, CloudFormation). Hands-on experience …
Experience completing Databricks development and/or administrative tasks • Familiarity with some of these tools: DB2, Oracle, SAP, Postgres, Elasticsearch, Glacier, Cassandra, DynamoDB, Hadoop, Splunk, SAP HANA, Databricks • Experience working with federal government clients. Security Clearance: Active CBP Public Trust required. SALARY RANGE: $130,000 to …
/SQL, DDL, MDX, HiveQL, SparkSQL, Scala). Experience with one or more scripting languages (e.g., Python, KornShell). Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience with any ETL tool like Informatica, ODI, SSIS, BODI, DataStage, etc. Our inclusive culture empowers Amazonians to deliver the best …
techniques. Hands-on experience with data manipulation and analysis using libraries like Pandas, NumPy, Scikit-learn, etc. Familiarity with big data technologies such as Hadoop, Spark, or similar. Experience with databases (SQL, NoSQL) and data extraction techniques. Familiarity with cloud platforms such as AWS, GCP, or Azure is a …
data processing frameworks, such as SQL, Python, R, or other programming languages used for data manipulation and analysis. Experience with data management platforms (e.g., Hadoop, AWS, Google Cloud Platform, or Azure). Knowledge of data security and compliance regulations (e.g., GDPR, CCPA). Strong ability to analyze large datasets …
science, mathematics, or a related quantitative field - Experience with scripting languages (e.g., Python, Java, R) and big data technologies/languages (e.g., Spark, Hive, Hadoop, PyTorch, PySpark) PREFERRED QUALIFICATIONS - Master's degree or advanced technical degree - Knowledge of data modeling and data pipeline design - Experience with statistical analysis, co…