implementing code that interacts with Cloud Distributed Coordination Frameworks. Two (2) years of experience with programs utilizing Big Data Cloud technologies. CERTIFICATIONS: Cloudera Certified Hadoop Developer, CompTIA Cloud Plus (Cloud+), or an AWS or Microsoft Azure foundational/fundamental-level certification may be substituted for one (1) year of Cloud experience.
Software Design or Development, Content Distribution/CDN, Scripting/Automation, Database Architecture, Cloud Architecture, Cloud Migrations, IP Networking, IT Security, Big Data/Hadoop/Spark, Operations Management, Service Oriented Architecture, etc. Experience in a 24x7 operational services or support environment. Experience with AWS Cloud services and/
in emerging and traditional technologies such as: node.js, Java, AngularJS, React, Python, REST, JSON, XML, Ruby, HTML/HTML5, CSS, NoSQL databases, relational databases, Hadoop, Chef, Maven, iOS, Android, and AWS/Cloud Infrastructure to name a few. You will: - Work with product owners to understand desired application capabilities
Fairfax, Virginia, United States Hybrid / WFH Options
CGI
your insights create business impact. Experience with cloud-based ETL solutions (e.g., AWS Glue, Azure Data Factory). Knowledge of Big Data technologies (e.g., Hadoop, Spark). Certification in relevant ETL or data integration technologies. Bachelor's Degree in data science, mathematics, statistics, economics, computer science, engineering, or other related … opportunities and tuition assistance. Wellness and Well-being programs. Due to the nature of this government contract, US Citizenship is required. Skills: Apache Spark, Hadoop Ecosystem (HDFS). What you can expect from us: Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership
and optimize large-scale data pipelines. Create and maintain ETL workflows for processing structured and unstructured data. Implement solutions using Big Data frameworks (Hadoop, Spark, Hive, etc.). Develop scalable and high-performance code in Python, Spark/Scala. Collaborate with data scientists and analysts to optimize workflows. … infrastructure using GitHub Workflows. Mandatory Requirements: 5+ years of experience as a Big Data Developer. Strong proficiency in Big Data frameworks such as Hadoop, Spark, Hive. Expertise in Python or Scala and advanced SQL. Solid understanding of distributed computing and cloud architectures (AWS, Azure, GCP). Strong
Work with data and analytics experts to strive for greater functionality in our data systems. Design and architect solutions with Big Data technologies (e.g., Hadoop, Hive, Spark, Kafka). Design and implement systems that run at scale, leveraging containerized deployments. Design, build, and scale data pipelines across a variety of … Informatics, Information Systems, or another quantitative field. Minimum 5 years of experience in a Data Engineer role. Required Skills: Experience with big data tools: Hadoop, Spark, etc. Experience with relational SQL and NoSQL databases, including Postgres. Experience with AWS cloud or remote services: EC2, EMR, RDS, Redshift. Experience with
We are a leading multi-strategy systematic hedge fund based in London, leveraging advanced technology and data to drive our trading strategies. Our team includes top quantitative researchers, data scientists, and engineers, all collaborating to develop innovative solutions. We are
experience - Candidate must have good exposure to Microservice-based architecture - Experience with Oracle databases and SQL knowledge - Preferred: exposure to Kafka/Hadoop/React JS/ElasticSearch/Spark - Good to have: exposure to Fabric/Kubernetes/Docker/Helm
A very exciting opportunity! The following skills/experience is required: Strong Data Architect background Experience in Data Technologies to include: Finbourne LUSID, Snowflake, Hadoop, Spark. Experience in Cloud Platforms: AWS, Azure or GCP. Previously worked in Financial Services: Understanding of data requirements for equities, fixed income, private assets
role within a small data science team. Client is looking for hands on experience developing solutions for complex data science problems using Python, R, Hadoop, and Greenplum (or other Massively Parallel Processing solutions). REQUIRED SKILLS: Bachelor's Degree in a quantitative or technical field of study, such as
developing and delivering in a Linux environment. - Experience with relational databases (PostgreSQL, MySQL), document databases (Elasticsearch, Solr), or big data technologies such as Hadoop, Hive, or Spark. - Experience with Git and Git-based workflows. Clearance requirement: TS/SCI w/ FS Poly REQUIRED for consideration. Location: Chantilly
Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. Experience with large-scale distributed systems such as Hadoop, Spark, etc. Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to our success. We make recruiting
engineering teams - ideally SRE or Production Engineering. Experience with large-scale distributed systems. Deep understanding and experience in one or more of the following: Hadoop, Spark, Flink, Kubernetes, AWS. Experience working with and leading geographically distributed teams and implementing high-level projects and migrations. Preferred Qualifications: BS degree in computer
of Java programming; can independently prototype solutions to problems. Experience with Recommender Systems, NLP, and Machine Learning libraries. Experience with big data technologies (e.g., Hadoop, MapReduce, Cascading, Scalding, Scala) is desirable but not required. Unix skills. Experience with start-up and R&D environments. Strong presentation skills in communicating
Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. Experience with large-scale distributed systems such as Hadoop, Spark, etc. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace
experience with Data Modelling and SQL; • Minimum 5 years of professional experience with data visualization (Power BI, Tableau, Qlik), data handling (Talend Open Studio, Hadoop, FME), and data modelling (metadata management). • Preferably, some experience in digital-ready policymaking and AI.
team. What you will need for this job: Experience with the agile software lifecycle. Experience in streaming and/or batch analytics (Spark, MapReduce, Hadoop). Experience in distributed databases, NoSQL databases, full-text search engines (MongoDB, Elasticsearch). Experience in designing enterprise APIs. Experience in RESTful web services. Experience in
tools including Kali Linux. Ability to automate processes through scripting. Knowledge of network analysis tools and operating system tools. Experience with the following: SQL, Hadoop, or other DBMS; Python or another scripting language; mobile technologies (cellular, Wi-Fi, Bluetooth, etc.); creating automated data visualizations. Work Environment: Chantilly, VA. Short term
Merge, Sort, Lookup, etc. Experience with various Ab Initio parallelism techniques; Ab Initio graphs using data, pipeline, and component parallelism. Working/basic knowledge of the Big Data/Hadoop ecosystem. Experience and expertise in SQL and Unix shell scripting. Knowledge of Ab Initio air commands and m commands. Working experience in Ab Initio BRE