Dunn Loring, Virginia, United States Hybrid / WFH Options
River Hawk Consulting LLC
/metadata structures, data flows, and models. Experience creating visualizations with Tableau or comparable programs. Demonstrated experience writing and modifying SQL. Demonstrated experience with Apache Hive, Apache Spark, and HDFS or S3. Demonstrated expertise developing software using Neo4j, Python, or Java. Knowledge of development tools such as …
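To make the Spark-plus-Hive SQL requirement above concrete, the following is a minimal PySpark sketch, assuming an already-configured Hive metastore; the database and table names are invented for illustration:

from pyspark.sql import SparkSession

# Enable Hive support so Spark SQL can see Hive-managed tables
# (assumes a configured metastore; "analytics.raw_events" is hypothetical).
spark = (SparkSession.builder
         .appName("hive-sql-sketch")
         .enableHiveSupport()
         .getOrCreate())

# Plain Hive/Spark SQL: filter, aggregate, and persist the result as a table.
daily_counts = spark.sql("""
    SELECT event_date, COUNT(*) AS events
    FROM analytics.raw_events
    WHERE event_date >= '2024-01-01'
    GROUP BY event_date
""")
daily_counts.write.mode("overwrite").saveAsTable("analytics.daily_event_counts")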
with NoSQL data persistence - DynamoDB, MongoDB, etc. DevOps mindset - knowing how to automate development and operational tasks. Big data or data science background: machine learning, Apache Spark, Apache Hive.
Working experience in the Palantir Foundry platform is a must • Experience designing and implementing data analytics solutions on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred). • Proven track record of understanding and transforming customer requirements into a best-fit design and architecture. • Demonstrated experience in end …
to streamline data workflows and reduce manual interventions. Must have: AWS, ETL, EMR, Glue, Spark/Scala, Java, Python. Good to have: Cloudera (Spark, Hive, Impala, HDFS), Informatica PowerCenter, Informatica DQ/DG, Snowflake, Erwin. Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, or a …
and migration of these data warehouses to modern cloud data platforms. Deep understanding and hands-on experience with big data technologies like Hadoop, HDFS, Hive, Spark, and cloud data platform services. Proven track record of designing and implementing large-scale data architectures in complex environments. CI/CD/DevOps experience …
given constraints • Excellent diplomacy and communication skills with both clients and technical staff • Desired Skills: • Proficiency in Python and Scala • Experience using Spark and Hive • Experience with Qlik or other data visualization tool administration • Experience completing Databricks development and/or administrative tasks • Familiarity with some of these tools: DB2 …
AWS platforms. Experience in writing efficient SQL and implementing complex ETL transformations on big data platforms. Experience with Big Data technologies (Spark, Impala, Hive, Redshift, Kafka, etc.). Experience in data quality testing; adept at writing test cases and scripts, presenting and resolving data issues. Experience with Databricks, Snowflake …
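As a sketch of what "efficient SQL and complex ETL transformations on big data platforms" can look like, here is a hedged PySpark example; the S3 paths and column names are assumptions, not taken from any listing:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw orders from object storage (hypothetical path and schema).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: deduplicate, derive a month column, aggregate per customer.
monthly_spend = (orders
                 .dropDuplicates(["order_id"])
                 .withColumn("order_month",
                             F.date_trunc("month", F.col("order_ts")))
                 .groupBy("customer_id", "order_month")
                 .agg(F.sum("amount").alias("monthly_spend")))

# Load: write partitioned output back to the data lake.
(monthly_spend.write
 .mode("overwrite")
 .partitionBy("order_month")
 .parquet("s3://example-bucket/curated/monthly_spend/"))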
Experience with the AWS ecosystem or other big data technologies such as EC2, S3, Redshift, Batch, and AppFlow. AWS: EC2, S3, Lambda, DynamoDB, Cassandra, SQL, Hadoop, Hive, HDFS, Spark, and other big data technologies. Understand, analyze, design, develop, and implement RESTful services and APIs. About Us: SPECTRAFORCE is one of …
analytics, or data science, with the ability to work effectively with various data types and sources. Experience using big data technologies (e.g. Hadoop, Spark, Hive) and database management systems (e.g. SQL and NoSQL). Graph Database Expertise: Deep understanding of graph database concepts, data modeling, and query languages (e.g. …
Azure Machine Learning Studio. Data Storage & Databases: SQL & NoSQL Databases: Experience with databases like PostgreSQL, MySQL, MongoDB, and Cassandra. Big Data Ecosystems: Hadoop, Spark, Hive, and HBase. Data Integration & ETL: Data Pipelining Tools: Apache NiFi, Apache Kafka, and Apache Flink. ETL Tools: AWS Glue, Azure Data Factory, Talend, and Apache Airflow. AI & Machine Learning: Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, and MXNet. AI Services: AWS SageMaker, Azure Machine Learning, Google AI Platform. DevOps & Infrastructure as Code: Containerization: Docker and Kubernetes. Infrastructure Automation: Terraform, Ansible, and AWS CloudFormation. API & Microservices: API Development: RESTful API design and …
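Several listings here name Apache Airflow for ETL orchestration. A minimal DAG sketch follows, assuming Airflow 2.x; the dag_id and the task bodies are placeholders:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    pass  # e.g. pull raw data from an API or object store

def transform():
    pass  # e.g. clean and reshape the extracted data

def load():
    pass  # e.g. write the result to a warehouse table

with DAG(
    dag_id="example_etl",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Run extract -> transform -> load in order, once per day.
    t_extract >> t_transform >> t_load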
pipelines. Proficiency in SQL. Experience with scripting languages like Python or KornShell. Unix experience. Troubleshooting data and infrastructure issues. Preferred Qualifications: Experience with Hadoop, Hive, Spark, EMR. Experience with ETL tools like Informatica, ODI, SSIS, BODI, DataStage. Knowledge of distributed storage and computing systems. Experience with reporting and analytics …
London, England, United Kingdom Hybrid / WFH Options
Solirius Reply
TensorFlow, XGBoost, PyTorch). Strong foundation in statistics, probability, and hypothesis testing. Experience with cloud platforms (AWS, GCP, Azure) and big data tools (Spark, Hive, Databricks, etc.) is a plus. Excellent communication and storytelling skills with the ability to explain complex concepts to non-technical stakeholders. Proven track record …
Milton Keynes, England, United Kingdom Hybrid / WFH Options
Santander
with team members, stakeholders, and end users, conveying technical concepts in a comprehensible manner. Skills across the following data competencies: SQL (AWS Athena/Hive/Snowflake); Hadoop/EMR/Spark/Scala; data structures (tables, views, stored procedures); data modelling - star/snowflake schemas, efficient storage, normalisation …
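To ground the star/snowflake schema competency named above, a small star-schema sketch in Spark SQL; every table and column name is hypothetical:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

# Dimension table: one row per customer.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key BIGINT,
        customer_name STRING,
        region STRING
    ) USING parquet
""")

# Fact table: one row per order, referencing the dimension by surrogate key.
spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_orders (
        order_id BIGINT,
        customer_key BIGINT,
        order_date DATE,
        amount DECIMAL(12, 2)
    ) USING parquet
""")

# The payoff of a star schema: simple fact-to-dimension joins for reporting.
spark.sql("""
    SELECT d.region, SUM(f.amount) AS total_sales
    FROM fact_orders f
    JOIN dim_customer d ON f.customer_key = d.customer_key
    GROUP BY d.region
""").show()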
a week. Flexibility is essential to adapt to schedule changes if needed. Preferred Requirements: Experience with big data technologies like Hadoop, Spark, PostgreSQL, ElasticSearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers; EKS, Diode, CI/CD, and Terraform are a plus. Work could possibly require some on …
MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as Hadoop, Hive, Spark, EMR - Experience with any ETL tool like Informatica, ODI, SSIS, BODI, DataStage, etc. Our inclusive culture empowers Amazonians to deliver the best results …
rapid prototyping and disciplined software development processes. Experience with Python, ML libraries (e.g. spaCy, NumPy, SciPy, Transformers, etc.), data tools and technologies (Spark, Hadoop, Hive, Redshift, SQL), and toolkits for ML and deep learning (SparkML, TensorFlow, Keras). Demonstrated ability to work on multi-disciplinary teams with diverse skill sets. …
a seamless UX to simplify skin cancer risk assessments and build user trust. SKINSCREENER Improving experience and making life simpler for home automation customers. HIVE Developing an intuitive online tool for exterior landscaping design. MEYER Bringing on-demand to the UK’s favourite TV listing and review platform. RADIO …
the big 3 cloud ML stacks (AWS, Azure, GCP). Hands-on experience with open-source ETL and data pipeline orchestration tools such as Apache Airflow and NiFi. Experience with large-scale/Big Data technologies such as Hadoop, Spark, Hive, Impala, PrestoDB, Kafka. Experience with workflow orchestration tools like Apache Airflow. Experience with containerisation using Docker and deployment on Kubernetes. Experience with NoSQL and graph databases. Unix server administration and shell scripting experience. Experience in building scalable data pipelines for highly unstructured data. Experience in building DWH and data lake architectures. Experience in working in cross …
Cloud Data Lake activities. The candidate should have industry experience (preferably in Financial Services) in navigating enterprise Cloud applications using distributed computing frameworks such as Apache Spark, Hadoop, and Hive. Working knowledge of optimizing database performance and scalability, and of ensuring data security and compliance. Education & Preferred Qualifications: Bachelor’s/Master's Degree in a …
London, England, United Kingdom Hybrid / WFH Options
Enigma
field. • 3+ years of experience as a Software Engineer with a strong focus on data work. • Strong proficiency in leading big data technologies (e.g., Hadoop, Spark, Hive). • Familiarity with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake). • Strong problem-solving skills and attention to detail. • Excellent communication and collaboration skills. …
to time. Flexibility is essential to accommodate any changes in the schedule. Preferred Requirements: Experience with big data technologies like Hadoop, Spark, PostgreSQL, ElasticSearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers; EKS, Diode, CI/CD, and Terraform are a plus. Work could possibly require some on …
e.g. Python, R, Scala, etc. (Python preferred). Strong proficiency in database technologies, e.g. SQL, ETL, NoSQL, DW, and Big Data technologies, e.g. PySpark, Hive, etc. Experienced working with structured and unstructured data, e.g. text, PDFs, JPGs, call recordings, video, etc. Knowledge of machine learning modelling techniques and how …
MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as Hadoop, Hive, Spark, EMR - Experience with any ETL tool like Informatica, ODI, SSIS, BODI, DataStage, etc. Amazon is an equal opportunity employer and does not discriminate …
languages, e.g. Python, R, Scala, etc. (Python preferred). Proficiency in database technologies, e.g. SQL, ETL, NoSQL, DW, and Big Data technologies, e.g. PySpark, Hive, etc. Experienced working with structured and unstructured data, e.g. text, PDFs, JPGs, call recordings, video, etc. Knowledge of machine learning modelling techniques and …
Random Forest, graph models, Bayesian inference, NLP, Computer Vision, neural networks. (Required) 4-7 years of experience with big data architecture and pipelines: Hadoop, Hive, Spark, Kafka, etc. (Required) 4-7 years of experience in data visualization. (Required) 2-4 years of management consulting, investment banking, or corporate strategy. (Required) …