Supermicro is a top-tier provider of advanced server, storage, and networking solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, Hyperscale, HPC and IoT/Embedded customers worldwide. We are among the fastest-growing of the Silicon Valley Top 50 technology firms. Our unprecedented global …
REMOTE - TEAMS Interview - LOCALS DESIRED - Previous proven Govt. client experience is mandatory Title: Cloudera Data Engineer-65050 Location: Lincoln, NE Duration: 6+ Months Job Type: C Description: Job Summary We are seeking a Cloudera Data Engineer to support the …
or Tampa, FL (Hybrid) Duration: 6 months (possibility of extension) Implementation Partner: Infosys End Client: To be disclosed JD: Minimum Years of Experience: 8+ Years Mandatory Skills: Java Spark Big Data Hadoop Shell Scripting …
CA & unanticipated sites throughout US. REQ: Bachelor's in Comp. Sci. or related field with exp. & some without exp. Positions available for IT professionals w/ any combination of following technologies: Hadoop, Cassandra, Spring Boot, Microservices, Spring Cloud, RESTful API, React, JUnit, PowerMock, Mockito, MySQL, Oracle, MongoDB, WebService, SOA, Agile methodology, JSP, Servlets, JavaScript, Java, Java 11, Spring, Cosmos, Kubernetes, Jenkins.
Reston, Virginia, United States Hybrid/Remote Options
ALTA IT Services
Job Title: Lead Cloudera AWS Support Engineer/Hadoop Administrator Location: Reston, VA (hybrid, onsite once a month) Type: 6-month contract with possibility of extension Compensation: $87/HR - $89/HR depending on experience PURPOSE: Performs complex analysis, design, development, automated unit and integration testing, and debugging of computer software ranging from operating system architecture integration and … alerts if necessary. Handle user incidents involving Cloudera services and assist users accordingly. Write shell scripts to monitor the health of Hadoop daemon services and respond to any warning or failure conditions. Monitor the health of all services running in the production cluster using Cloudera Manager. Performing/… quality, training, and documentation efforts. Move data and use YARN to allocate resources and schedule jobs. Manage job workflows with Oozie and Hue. Implement comprehensive security policies across the Hadoop cluster using Ranger. Configure and manage Cloudera Data Science Workbench using Cloudera Manager. Troubleshoot potential issues with Kerberos, TLS/SSL, Models, and Experiments, as well as other workload …
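As a loose illustration of the daemon health-check scripting described above, here is a minimal Python sketch that polls the Cloudera Manager REST API for service health. The host, port, API version, cluster name, and credentials are hypothetical placeholders, not details from the posting:

```python
"""Minimal sketch: list Cloudera Manager services whose health is not GOOD.
All connection details below are invented placeholders."""
import requests

CM_BASE = "http://cm.example.internal:7180/api/v33"  # hypothetical host and API version
CLUSTER = "prod-cluster"                              # hypothetical cluster name
AUTH = ("admin", "changeme")                          # use real secrets handling in practice

def unhealthy_services():
    """Return (name, healthSummary) pairs for services not reporting GOOD."""
    resp = requests.get(f"{CM_BASE}/clusters/{CLUSTER}/services", auth=AUTH)
    resp.raise_for_status()
    return [
        (svc.get("name"), svc.get("healthSummary"))
        for svc in resp.json().get("items", [])
        if svc.get("healthSummary") != "GOOD"
    ]

if __name__ == "__main__":
    for name, health in unhealthy_services():
        print(f"WARNING: {name} is {health}")  # hook into paging/alerting here
```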
pre-systems Model and map scalable ELT/ETL flows (batch and streaming) Interface between Data Analysts and Data Scientists Enrich data and load into big data environment (Apache Hadoop on Cloudera) Profile: Technical education (Computer Science HTL, Computer Science degree, Data Science etc.) Experience in data modeling (relational databases and Apache Hadoop) Know-how of core technologies such as SQL (MS SQL Server), Apache Hadoop (Kafka, NiFi, Flink, Scala and/or Java, Python) and Linux Interest in working with high-frequency data processing in a near-real-time environment We offer: Permanent position in a renowned IT company in Vienna You offer more than required? Perfect, we do too! Dependent on your qualifications and experience …
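For the streaming enrich-and-load flow this listing outlines, a minimal Python sketch using kafka-python might look like the following; the broker address and topic names are invented for illustration:

```python
"""Toy enrich-and-forward step between two Kafka topics (placeholder names)."""
import json
from kafka import KafkaConsumer, KafkaProducer

BROKER = "broker.example.internal:9092"  # hypothetical broker address

consumer = KafkaConsumer(
    "raw-events",  # hypothetical source topic
    bootstrap_servers=BROKER,
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for message in consumer:
    event = message.value
    event["enriched"] = True  # stand-in for a real lookup or join
    producer.send("enriched-events", event)  # hypothetical sink topic
```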
years Education: Bachelor's in Science (IT/Computer Science/Engineering) Employment Type: Full-Time UK-based candidates only. Role Overview: We are seeking an experienced Scala/Hadoop/Big Data Developer. The role will focus on building and enhancing data-driven solutions, working within a fast-paced financial services environment. Key Responsibilities: Design, develop, and maintain … applications using Scala, Python, Hadoop and Java. Work with Big Data technologies, including Spark, Hive (nice to have). Collaborate with cross-functional teams to deliver scalable, high-performance solutions. Participate in code reviews, testing, and performance optimization. Ensure best practices in coding, design, and architecture. Skills & Experience Required: 2-6 years of software development experience. Strong hands …
Position: Senior Hadoop Developer with Java Location: New York/Alpharetta, GA (locals or nearby only who can do in-person with client) Duration: 12 Months Required Skills: Programming Languages: Strong proficiency in Python, Java, and SQL. Big Data Frameworks: Deep understanding of Hadoop ecosystem (HDFS, MapReduce, Hive, Spark). Cloud Data Warehousing: Expertise in Snowflake architecture, data …
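As a sketch of the kind of Hive-on-Spark work such a role involves, the following PySpark snippet aggregates a Hive table; the database, table, and column names are invented, and it assumes a Hive-enabled Spark deployment:

```python
"""Sketch: aggregate a Hive table with Spark (invented table/column names)."""
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("hive-aggregation-example")
    .enableHiveSupport()  # assumes a configured Hive metastore
    .getOrCreate()
)

# Hypothetical table: count daily executions per desk.
daily_counts = (
    spark.table("trades.executions")
    .groupBy("desk", F.to_date("executed_at").alias("trade_date"))
    .agg(F.count("*").alias("n_trades"))
)
daily_counts.write.mode("overwrite").saveAsTable("trades.daily_counts")
```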
Configuring and tuning Cloudera services for performance and security. Ensure secure, scalable, and cost-effective deployments on cloud platforms (AWS, Azure, or GCP). An understanding of all the Hadoop daemons, along with their roles and responsibilities in the cluster. Able to troubleshoot and fix issues in Cloudera services. Adding and removing nodes in the cluster. … Cloudera Data Platform (CDP) and CDP Services and Big Data knowledge. Proficiency in Terraform for infrastructure as code (IaC). Strong hands-on experience with Cloudera CDP and the Hadoop ecosystem (Hive, Impala, HDFS, etc.). Experience with GitHub Actions or similar CI/CD tools (e.g., Jenkins, GitLab CI). Solid scripting skills in Shell and Python. Extensive experience …
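A minimal Python sketch of the node-health side of such administration, parsing `hdfs dfsadmin -report` output; it assumes the HDFS client is on PATH, and the parsing is best-effort and format-dependent:

```python
"""Best-effort DataNode health check via `hdfs dfsadmin -report`."""
import re
import subprocess

def datanode_counts():
    """Return (live, dead) DataNode counts parsed from the report text."""
    out = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout
    live = re.search(r"Live datanodes \((\d+)\)", out)
    dead = re.search(r"Dead datanodes \((\d+)\)", out)
    return (
        int(live.group(1)) if live else 0,
        int(dead.group(1)) if dead else 0,
    )

if __name__ == "__main__":
    live, dead = datanode_counts()
    if dead:
        print(f"ALERT: {dead} dead DataNode(s), {live} live")
```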
Contract Opportunity: Data Analyst - Hadoop/SQL – 6-Month Inside IR35 📍 Location: Leicester or Milton Keynes (Hybrid – 3 Days in Office per Week) 📅 Duration: 6 Months 💼 IR35 Status: Inside IR35 💰 Rate: Flexible, Depending on Experience We’re looking for an experienced Data Analyst to join a project-focused team working on the development and implementation of a robust data … model for banking data. The successful candidate requires knowledge of Hadoop and SQL. Role Purpose: You’ll collaborate with a small technical team and liaise with IT to ensure the data model is effectively productionised. The ideal candidate will have strong technical data analysis skills, experience with big data technologies, and the ability to communicate complex insights clearly … banking-focused data model. Liaise with IT teams to transition data models into production environments. Conduct data mining and exploratory data analysis to support model development. Apply strong SQL, Hadoop, and cloud-based data processing skills to manage and analyse large datasets. Support the design and structure of data models, with a working understanding of data modelling principles. Present …
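As an illustration of the kind of exploratory profiling such a role covers, here is a small Python sketch querying Hive via PyHive and pandas; the connection details, database, table, and columns are all invented placeholders:

```python
"""Sketch: profile null rates in a Hive table to inform data modelling.
Connection details and schema names are invented placeholders."""
import pandas as pd
from pyhive import hive

conn = hive.Connection(host="hive.example.internal", port=10000, username="analyst")

# Null-rate profiling for a candidate modelling column.
df = pd.read_sql(
    """
    SELECT account_type,
           COUNT(*) AS n_rows,
           SUM(CASE WHEN balance IS NULL THEN 1 ELSE 0 END) AS null_balances
    FROM banking.accounts
    GROUP BY account_type
    """,
    conn,
)
df["null_rate"] = df["null_balances"] / df["n_rows"]
print(df.sort_values("null_rate", ascending=False))
```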
cloud-native tooling. Tech Stack Programming: Python (Java familiarity is a plus). AWS: S3, Kinesis, Glue, Lambda, Step Functions, SageMaker, and more. On-Prem: Managed Kubernetes Platform and Hadoop ecosystem. Why This Role is Different Direct Impact: Build AI tools that traders and quants use daily to optimize strategies. Creative Freedom: Open collaboration and the chance to bring … paced front-office environment in either engineering or analytical roles. Passion for being hands-on and contributing to a collaborative engineering culture. Nice to Have Experience with on-prem Hadoop and Kubernetes. Familiarity with AWS cost management and optimization tools. Knowledge of front-office developer workflows in financial services.
Glasgow, Scotland, United Kingdom Hybrid/Remote Options
Caspian One
and cloud-native tooling Tech Stack Programming: Python (Java familiarity is a plus) AWS: S3, Kinesis, Glue, Lambda, Step Functions, SageMaker, and more On-Prem: Managed Kubernetes Platform and Hadoop ecosystem Why This Role is Different Direct Impact: Build AI tools that traders and quants use daily to optimize strategies Creative Freedom: Open collaboration and the chance to bring … paced front-office environment in either engineering or analytical roles Passion for being hands-on and contributing to a collaborative engineering culture Nice to Have Experience with on-prem Hadoop and Kubernetes Familiarity with AWS cost management and optimization tools Knowledge of front-office developer workflows in financial services
Job Title: Hadoop/ETL Developer Location: Charlotte, NC Work Arrangement: Onsite Client Industry: Banking Duration: 12-18 months Contract Schedule: Monday to Friday Must Haves: 5+ years of professional experience in data engineering, ETL development or Hadoop development 3+ years working with Hadoop ecosystem (HDFS, MapReduce, Hive, Spark) 3+ years of Informatica PowerCenter (or similar ETL …
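The posting centres on Informatica PowerCenter, but as a rough stand-in for the same batch ETL pattern in the Hadoop ecosystem, a PySpark step could look like this; the paths and column names are invented:

```python
"""Toy batch ETL: land CSV, deduplicate and type-cast, write Parquet.
Paths and columns are invented placeholders."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-example").getOrCreate()

raw = spark.read.option("header", True).csv("hdfs:///landing/transactions/")
cleaned = (
    raw.dropDuplicates(["txn_id"])                               # drop repeated records
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))  # enforce types
       .filter(F.col("amount").isNotNull())                      # reject bad rows
)
cleaned.write.mode("append").parquet("hdfs:///curated/transactions/")
```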
Glasgow, Scotland, United Kingdom Hybrid/Remote Options
Caspian One
and cloud-native tooling Tech Stack Programming: Python (Java familiarity is a plus) AWS: S3, Kinesis, Glue, Lambda, Step Functions, SageMaker, and more On-Prem: Managed Kubernetes Platform and Hadoop ecosystem Why This Role is Different Direct Impact: Build AI tools that traders and quants use daily to optimize strategies Creative Freedom: Open collaboration and the chance to bring … ECS, IAM, KMS, API Gateway, Step Functions, MSK, CloudFormation) Passion for being hands-on while enabling and growing a small engineering team Nice to Have Experience with on-prem Hadoop and Kubernetes Familiarity with AWS cost management and optimisation tools Knowledge of front-office developer workflows in financial services
cloud-native tooling. Tech Stack Programming: Python (Java familiarity is a plus). AWS: S3, Kinesis, Glue, Lambda, Step Functions, SageMaker, and more. On-Prem: Managed Kubernetes Platform and Hadoop ecosystem. Why This Role is Different Direct Impact: Build AI tools that traders and quants use daily to optimize strategies. Creative Freedom: Open collaboration and the chance to bring … IAM, KMS, API Gateway, Step Functions, MSK, CloudFormation). Passion for being hands-on while enabling and growing a small engineering team. Nice to Have Experience with on-prem Hadoop and Kubernetes. Familiarity with AWS cost management and optimisation tools. Knowledge of front-office developer workflows in financial services.
Data Solutions in Mission-Critical areas. WE NEED THE BIG DATA ENGINEER TO HAVE... Current DV clearance - Standard or Enhanced Must have experience with big data tools such as Hadoop, Cloudera or Elasticsearch Experience with Palantir Foundry is preferred but not essential Experience working in an Agile Scrum environment Experience in design, development, test and integration of software IT …/DEVELOPED VETTING/DEVELOPED VETTED/DEEP VETTING/DEEP VETTED/SC CLEARED/SC CLEARANCE/SECURITY CLEARED/SECURITY CLEARANCE/NIFI/CLOUDERA/HADOOP/KAFKA/ELASTIC SEARCH …
us the software solutions of tomorrow! Learn more here: ADMIRAL Technologies - A clear victory for your future! Responsibilities: Administer, monitor and optimize our Big Data environments based on Apache Hadoop (AWS Cloud) Manage and maintain services like Kafka, Flink, NiFi, DynamoDB and Iceberg Tables IaC deployment via Terraform Plan and execute updates and upgrades Advise our Data Engineers and … Data Scientists on the selection of Hadoop services for business use cases 3rd-level support through troubleshooting and error analysis Support workload optimization (e.g. query optimization) Evaluation and implementation of … Cloud Native Services Profile: Technical education (computer science studies, etc.) Experience in system administration of Linux systems (RedHat) Expertise in building and operating Big Data environments based on Apache Hadoop clusters Interest in continuously learning and improving one's skillset We offer: State-of-the-Art Technologies We understand the usage of state-of-the-art technologies as part …
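For the DynamoDB portion of the managed services listed above, a routine status check in Python with boto3 might look like this; the region and table names are invented placeholders:

```python
"""Sketch: flag DynamoDB tables that are not ACTIVE (placeholder names)."""
import boto3

dynamodb = boto3.client("dynamodb", region_name="eu-central-1")  # hypothetical region

for table in ("events", "session-state"):  # hypothetical table names
    status = dynamodb.describe_table(TableName=table)["Table"]["TableStatus"]
    if status != "ACTIVE":
        print(f"WARNING: table {table} is {status}")  # feed into alerting
```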
Title: Databricks Data Engineer - Hadoop Location: Boston, MA (On-site) Duration: 6 months (possibility of extension) Implementation Partner: Infosys End Client: To be disclosed JD: Minimum Years of Experience: 4-6 years Hands-on experience in Databricks, Azure, and Scala Experience with Hadoop …
Title: Senior Data Engineer - Databricks & Hadoop Location: Boston, MA (On-site) Duration: 6 months (possibility of extension) Implementation Partner: Infosys End Client: To be disclosed JD: Minimum Years of Experience: 7+ years Advanced experience in Databricks, Azure, and Scala Strong expertise in Hadoop …