utilizing AGILE development methodologies and Application Lifecycle Management. Experience working with programming languages and data formats such as Java, C++, JSON, PHP, Perl, Python, Ruby, Pig/Hive, and/or Elixir. SAIC accepts applications on an ongoing basis and there is no deadline. Covid Policy: SAIC does not require COVID-19 vaccinations or boosters.
team of developers in credit risk regulatory and compliance data delivery projects is also essential. Additional relevant technical skills, such as SQL skills in Hive, Impala, and Teradata, as well as experience in AWS or other cloud platforms, are highly valued. This role will be based in our Northampton office.
Experience utilizing AGILE development methodologies and Application Lifecycle Management. Experience working with programming languages and data formats such as Java, C++, JSON, PHP, Perl, Python, Ruby, Pig/Hive, and/or Elixir. Overview: We are seeking a Cloud Engineering Lead to join our team supporting ODNI HALO. TekSynap is a fast-growing company.
Herndon, Virginia, United States Hybrid / WFH Options
Maxar Technologies
services working with open-source resources in a government computing environment; maintaining backend GIS technologies; ICD 503; big data technologies such as Accumulo, Spark, Hive, Hadoop, or Elasticsearch. Familiarity with: hybrid cloud/on-prem architecture, AWS, C2S, and OpenStack; concepts such as data visualization and data management
Centre of Excellence. Skills, knowledge and expertise: Deep expertise in the Databricks platform, including Jobs and Workflows, Cluster Management, Catalog Design and Maintenance, Apps, Hive Metastore Management, Network Management, Delta Sharing, Dashboards, and Alerts. Proven experience working with big data technologies, e.g., Databricks and Apache Spark.
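To make the Databricks/Delta work above concrete, here is a minimal sketch of a managed Delta table round-trip. It assumes a Databricks notebook or job where a `spark` session is already provided; the table name `demo.events` is a hypothetical placeholder.

```python
# Build a tiny DataFrame; in a Databricks notebook/job, `spark` is predefined.
df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])

# Save it as a managed Delta table registered in the metastore
# (Unity Catalog or a Hive metastore, depending on the workspace).
df.write.format("delta").mode("overwrite").saveAsTable("demo.events")

# Read it back through the catalog and inspect the rows.
spark.table("demo.events").show()
```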
mission-critical data pipelines and ETL systems. 5+ years of hands-on experience with big data technology, systems, and tools such as AWS, Hadoop, Hive, and Snowflake. Expertise with common software engineering languages such as Python, Scala, Java, and SQL, and a proven ability to learn new programming languages. Experience … visualization skills to convey information and results clearly. Experience with DevOps tools such as Docker, Kubernetes, Jenkins, etc. Experience with event messaging frameworks like Apache Kafka. The hiring range for this position in Santa Monica, California is $136,038 to $182,490 per year, in Glendale, California is
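As a rough illustration of the pipeline work such roles describe, a minimal PySpark ETL sketch; the Hive table, partition column, and output path are hypothetical placeholders, and the cluster is assumed to have Hive support and S3 connectors configured.

```python
from pyspark.sql import SparkSession, functions as F

# Spark session with access to the Hive metastore.
spark = (
    SparkSession.builder
    .appName("daily-etl")
    .enableHiveSupport()
    .getOrCreate()
)

# Extract: read one partition of raw events from a Hive table.
raw = spark.table("raw.events").where(F.col("ds") == "2024-01-01")

# Transform: aggregate events per user.
daily = raw.groupBy("user_id").agg(F.count("*").alias("event_count"))

# Load: write the result out as Parquet (an S3 bucket in practice).
daily.write.mode("overwrite").parquet("s3a://my-bucket/curated/daily_counts/")
```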
as HBase, CloudBase/Accumulo, Bigtable, etc.; Shall have demonstrated work experience with the MapReduce programming model and technologies such as Hadoop, Hive, Pig, etc.; Shall have demonstrated work experience with the Hadoop Distributed File System (HDFS); Shall have demonstrated work experience with serialization such as JSON and/or BSON
/Accumulo, and Bigtable; Convert existing algorithms or develop new algorithms to utilize the MapReduce programming model and technologies such as Hadoop, Hive, and Pig; Support operational systems utilizing the HDFS; Support the deployment of operational systems and applications in a cloud environment; Conduct scalability assessments of
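To make the MapReduce conversion concrete, a word-count sketch in the Hadoop Streaming style: the mapper emits (word, 1) pairs, Hadoop sorts by key, and the reducer sums each run. In a real job the two functions would live in separate scripts passed to the streaming jar; this single-file layout is just for illustration.

```python
import itertools
import sys

def mapper(lines):
    # Map: emit one "word<TAB>1" pair per word.
    for line in lines:
        for word in line.split():
            print(f"{word}\t1")

def reducer(lines):
    # Reduce: Hadoop delivers mapper output sorted by key, so equal
    # words arrive adjacent and groupby can sum each run of counts.
    pairs = (line.rstrip("\n").split("\t") for line in lines)
    for word, group in itertools.groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    # Run as `python wordcount.py map < input` or
    # `python wordcount.py reduce < sorted-mapper-output`.
    (mapper if sys.argv[1:] == ["map"] else reducer)(sys.stdin)
```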
responsibilities: Architecture design and implementation of next-generation data pipelines and BI solutions. Manage AWS resources including EC2, RDS, Redshift, Kinesis, EMR, Lambda, Glue, Apache Airflow, etc. Build and deliver high-quality data architecture and pipelines to support business analysts, data scientists, and customer reporting needs. Interface with other … MDX, HiveQL, SparkSQL, Scala). Experience with one or more scripting languages (e.g., Python, KornShell). PREFERRED QUALIFICATIONS: Experience with big data technologies such as Hadoop, Hive, Spark, and EMR. Our inclusive culture empowers Amazonians to deliver the best results for our customers.
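For a sense of what managing Apache Airflow looks like in practice, a minimal DAG sketch: one daily pipeline with a single Python task. The DAG id and callable are hypothetical, and the `schedule` argument assumes Airflow 2.4 or later.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # A real task would pull from a source (e.g. RDS) and load into
    # a warehouse such as Redshift; a print stands in here.
    print("running extract/load step")

with DAG(
    dag_id="daily_reporting",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```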
or five (5) years of programming experience may be substituted for a bachelor's degree. Clearance: Must possess a TS/SCI with polygraph. Hadoop, Hive, and/or Pig: Within the last three (3) years, a minimum of one (1) year of experience with the Hadoop Distributed File System (HDFS).
engineering or in a similar discipline. Analyzing: Ability to make data-driven business suggestions by connecting relevant data points to clear conclusions; familiarity with Hive, Vertica, MySQL, BigQuery, Redshift. Automation: Familiarity with scripting in Python, working with APIs, and testing. Knowledge of working with n8n is a plus. Troubleshooting:
Plant, Emerson Plantweb/AMS, GE/Meridium APM, Aveva, Bentley, and OSIsoft PI. Familiarity with relevant technology, such as Big Data (Hadoop, Spark, Hive, BigQuery); Data Warehouses; Business Intelligence; and Machine Learning. Savvy at helping customers create business cases with quantified ROI to justify new investments.
large-scale machine-learning infrastructure for online recommendation, ads ranking, personalization, and search. You will work on Big Data technologies such as AWS, Spark, Hive, Lucene/Solr, Elasticsearch, etc. You will drive appropriate technology choices for the business, lead continuous innovation, and shape the future of India Advertising.
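As a small, hypothetical illustration of the search side of that stack, a basic Elasticsearch query from Python; it assumes the official 8.x `elasticsearch` client, and the host, index, and field names are made up.

```python
from elasticsearch import Elasticsearch

# Connect to a local node; a real deployment would use its cluster URL.
es = Elasticsearch("http://localhost:9200")

# Full-text match query against a hypothetical product index.
resp = es.search(
    index="products",
    query={"match": {"title": "wireless headphones"}},
    size=5,
)

# Print relevance score and title for each hit.
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```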
for our internal customers to access and query the data hundreds of thousands of times per day, using Amazon Web Services' (AWS) Redshift, Hive, and Spark. We build scalable solutions that grow with the Amazon business. The BDT team is building an enterprise-wide Big Data Marketplace leveraging AWS technologies.
Tampa, Florida, United States Hybrid / WFH Options
LTIMindtree
libraries. Good to have: GenAI skill set. Good knowledge of and work experience with Unix commands and shell scripts, etc. Good to have experience with PySpark, Hadoop, and Hive as well. Expertise in software engineering principles such as design patterns, code design, testing, and documentation. Writing effective and scalable code, implementing robust and … in Python Programming; Python Institute Certified Professional in Python Programming 1; Python Institute Certified Professional in Python Programming 2; Databricks Certified Associate Developer for Apache Spark. Mandatory Skills: Apache Spark, Big Data Hadoop Ecosystem, Data Architecture, Python
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Yelp USA
to the experimentation and development of new ad products at Yelp. Design, build, and maintain efficient data pipelines using large-scale processing tools like Apache Spark to transform ad-related data. Manage high-volume, real-time data streams using Apache Kafka and process them with frameworks like Apache Flink. Estimate timelines for projects, feature enhancements, and bug fixes. Work with large-scale data storage solutions, including Apache Cassandra and various data lake systems. Collaborate with cross-functional teams, including engineers, product managers, and data scientists, to understand business requirements and translate them into effective system designs. … a proactive approach to identifying opportunities and recommending scalable, creative solutions. Exposure to some of the following technologies: Python, AWS Redshift, AWS Athena/Apache Presto, big data technologies (e.g., S3, Hadoop, Hive, Spark, Flink, Kafka), NoSQL systems like Cassandra; DBT is nice to have.
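A compact sketch of the Spark-plus-Kafka pattern this listing describes: Spark Structured Streaming consuming a Kafka topic and maintaining a running count per key. It assumes the `spark-sql-kafka-0-10` package is on the classpath; the broker address and topic name are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ad-events-stream").getOrCreate()

# Subscribe to a Kafka topic as an unbounded streaming DataFrame.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "ad-events")
    .load()
)

# Kafka delivers bytes; cast the key to a string, then count per key.
counts = (
    events.select(F.col("key").cast("string").alias("key"))
    .groupBy("key")
    .count()
)

# Stream the running counts to the console for demonstration.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```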
scalable Big Data Store (NoSQL) such as HBase, CloudBase/Accumulo, Bigtable, etc.; the MapReduce programming model and technologies such as Hadoop, Hive, Pig, etc.; the Hadoop Distributed File System (HDFS); serialization such as JSON and/or BSON. 4 years of SWE experience may be substituted for a bachelor's degree.
forensics, log analysis). Experience interpreting information from multiple sources and working with data sets. Knowledge of database tools/systems such as HBase, SQL, and Hive Query Language. Preferred Qualifications: Coding proficiency in Python, PHP, and/or C++, or similar high-level languages. About Meta: Meta builds technologies that help people connect, find communities, and grow businesses.
Services, Telecom and Media, Retail and CPG, and Public Services. Consolidated revenues of $13 billion.
Job Description:
=============
Spark - Must Have
Scala - Must Have
Hive & SQL - Must Have
Hadoop - Must Have
Communication - Must Have
Banking/Capital Markets Domain - Good to Have
Note: the candidate should know Scala/Python (Core) as a coding language; a PySpark profile will not help here.
Scala/Spark - a good big data resource with the below skill set:
- Spark
- Scala
- Hive/HDFS/HQL
- Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.)
- Experience in big data technologies and real-time data processing platforms (Spark Streaming)
basic scripts); Pydantic experience (DESIRABLE); SQL; PySpark; Delta Lake; Bash (both CLI usage and scripting); Git; Markdown; Scala (DESIRABLE); Azure SQL Server as a Hive Metastore (DESIRABLE). TECHNOLOGIES: Azure Databricks, Apache Spark, Delta Tables, data processing with Python, PowerBI (Integration/Data Ingestion), JIRA.
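Since Pydantic appears in the skills list above, a small sketch of validating an ingestion record with the Pydantic v2 API; the `Event` schema is a hypothetical example.

```python
from pydantic import BaseModel, ValidationError

class Event(BaseModel):
    user_id: int
    action: str
    ds: str

try:
    # Pydantic coerces the numeric string "42" to an int and rejects
    # records that do not match the schema.
    event = Event.model_validate(
        {"user_id": "42", "action": "click", "ds": "2024-01-01"}
    )
    print(event)
except ValidationError as exc:
    print(exc)
```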
be a bonus.) - SQL - PySpark - Delta Lake - Bash (both CLI usage and scripting) - Git - Markdown - Scala (bonus, not compulsory) - Azure SQL Server as a Hive Metastore (bonus). Technologies: Azure Databricks, Apache Spark, Delta Tables, data processing with Python, PowerBI (Integration/Data Ingestion), JIRA.