implementation of modern data architectures (ideally Azure, AWS, Microsoft Fabric, GCP, Data Factory) and modern data warehouse technologies (Snowflake, Databricks) Experience with database technologies such as SQL, NoSQL, Oracle, Hadoop, or Teradata Ability to collaborate within and across teams with differing technical knowledge to support delivery and educate end users on data products Expert problem-solving skills, including debugging …
time streaming processing as well as high volume batch processing, and skilled in Advanced SQL, GCP BigQuery, Apache Kafka, Data-Lakes, etc. Hands-on Experience in Big Data technologies - Hadoop, Hive, and Spark, and an enterprise-scale Customer Data Platform (CDP) Experience in at least one programming language (Python strongly preferred), cloud computing platforms (e.g., GCP), big data tools …
a certification relevant to the product being deployed and/or maintained. 5-7 years of direct experience in Data Engineering with experience in tools such as: Big data tools: Hadoop, Spark, Kafka, etc. Relational SQL and NoSQL databases, including Postgres and Cassandra. Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. AWS cloud services: EC2, EMR, RDS, Redshift …
of the following: Python, SQL, Java Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams Deep knowledge of database technologies: Distributed systems (e.g., Spark, Hadoop, EMR) RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL) NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j) Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD, and …
Base Programming, Advanced Programming, Data Integration) Experience with SAS Visual Analytics and SAS Enterprise Miner Knowledge of machine learning frameworks and advanced statistical modeling Familiarity with big data technologies (Hadoop, Spark, NoSQL databases) Experience with version control systems (Git) and collaborative development practices Understanding of data privacy regulations and compliance frameworks Experience with the Department of Defense or similarly …
for operational decision support and analyzing unstructured data (e.g., text, imagery) Ability to architect and maintain scalable data lakes, data warehouses, or distributed storage systems (e.g., Delta Lake, Snowflake, Hadoop, or NoSQL solutions) Demonstrated understanding of data security, privacy, and sovereignty issues, particularly in military or international environments, ensuring compliance with NATO operational and ethical standards Experience building visually …
including data scientists, DBAs, and business analysts Strong verbal and written communication skills for documenting and presenting solutions Desired Qualifications: Experience integrating Oracle databases with big data platforms (e.g., Hadoop, Spark, or cloud-based data lakes) Familiarity with Oracle Big Data SQL or Oracle Autonomous Database Exposure to cloud platforms such as Oracle Cloud Infrastructure (OCI), AWS, or Azure …
including data scientists, DBAs, and business analysts. Strong verbal and written communication skills for documenting and presenting solutions Preferred Qualifications: Experience integrating Oracle databases with big data platforms (e.g., Hadoop, Spark, or cloud-based data lakes) Familiarity with Oracle Big Data SQL or Oracle Autonomous Database. Exposure to cloud platforms such as Oracle Cloud Infrastructure (OCI), AWS, or Azure …
basic commands and Shell scripting Experience with a public cloud, including AWS, Microsoft Azure, or Google Cloud Experience with distributed data or computing tools, including Spark, Databricks, Hadoop, Hive, AWS EMR, or Kafka Experience working on real-time data and streaming applications Experience with NoSQL implementation, including MongoDB or Cassandra Experience with data warehousing using AWS Redshift …
agile development methodologies. Preferred Qualifications Familiarity with event-driven architecture and messaging systems (Kafka, RabbitMQ). Experience with feature stores and model registries. Familiarity with big data technologies (Spark, Hadoop). Knowledge of monitoring and logging tools for machine learning models (Prometheus, Grafana, ELK stack). Significant experience with petabyte scale data sets. Significant experience with large-scale, multi-INT …
Stay abreast of emerging technologies and best practices in data engineering, cloud computing, and the space industry. Desired Qualifications: Experience with other big data technologies (e.g., Apache Kafka, Apache Spark, Hadoop). Familiarity with cloud platforms (AWS, Azure, GCP) and their data services. Experience with containerization technologies (Docker, Kubernetes). Knowledge of data warehousing concepts and methodologies. Experience with version …
including Python, SQL, Scala, or Java 2+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud) 3+ years of experience with Distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL) 2+ years of experience working on real-time data and streaming applications 2+ years of experience with NoSQL implementation (Mongo, Cassandra) 2+ years …
Elizabeth, New Jersey, United States Hybrid / WFH Options
ALTA IT Services
of experience may be considered in lieu of a degree. Proficiency in programming languages like Python or Java, strong SQL skills, and knowledge of big data tools like Apache Hadoop, Spark, or Kafka. Experience with cloud platforms (AWS, Azure, GCP) and data warehousing solutions (Snowflake, Redshift, BigQuery). Self-driven, with a demonstrated ability to work independently with minimum …
Machine Learning, Deep Learning or LLM Frameworks) Desirable: Minimum 2 years' experience in a data-related field Minimum 2 years in a Business or Management Consulting field Experience with Docker, Hadoop, PySpark, Apache or MS Azure Minimum 2 years NHS/Healthcare experience Disclosure and Barring Service Check This post is subject to the Rehabilitation of Offenders Act (Exceptions Order …
mining, and predictive analytics. Proficient in Python, R, SQL, and common machine learning frameworks (e.g., TensorFlow, Scikit-learn). Data Infrastructure: Strong experience working with big data tools (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, Azure, Google Cloud). Problem-Solving: Ability to design and implement innovative solutions to complex problems, leveraging data to drive business improvements. Communication …
including Python, SQL, Scala, or Java 4+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud) 5+ years of experience with Distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL) 4+ years of experience working on real-time data and streaming applications 4+ years of experience with NoSQL implementation (Mongo, Cassandra) 4+ years …
including Python, SQL, Scala, or Java 4+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud) 4+ years of experience with Distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL) 2+ years of hands-on experience building and tuning data pipelines in PySpark 4+ years of experience working on real-time data and …