… Experience in Big Data implementation projects. Experience in the definition of Big Data architecture with different tools and environments: Cloud (AWS, Azure and GCP), Cloudera, NoSQL databases (Cassandra, MongoDB), ELK, Kafka, Snowflake, etc. Past experience in Data Engineering and data quality tools (Informatica, Talend, etc.). Previous involvement in …
Greater London, England, United Kingdom (Hybrid / WFH Options)
InterEx Group
… scikit-learn). Knowledge of software engineering practices (applying coding best practices to data science, unit testing, version control, code review). Experience with Hadoop (especially the Cloudera and Hortonworks distributions), other NoSQL stores (especially Neo4j and Elasticsearch), and streaming technologies (especially Spark Streaming). Deep understanding of data manipulation/wrangling techniques. Experience …
… a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.). Preferred qualifications, capabilities, and skills: knowledge of AWS; knowledge of Databricks; understanding of Cloudera Hadoop, Spark, HDFS, HBase and Hive; understanding of Maven or Gradle. About the team: J.P. Morgan is a global leader in financial services, providing strategic advice …
… cloud platforms, preferably IBM Cloud. Contributions to open-source projects or personal projects demonstrating Big Data and Java development skills. Relevant certifications such as Cloudera Certified Associate (CCA) or Hortonworks Certified Developer (HCD) are considered a plus. By joining IBM's Public Sector team as a Big Data Java Developer …
… and demonstrable knowledge of applying Data Engineering best practices (coding practices for data science, unit testing, version control, code review). Big data ecosystems: Cloudera/Hortonworks, AWS EMR, GCP Dataproc or GCP Cloud Data Fusion. Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub and Spark Streaming. Experience …
… deploy mission-critical and highly differentiated Data & AI solutions for companies looking to migrate their data stack from legacy data platforms such as Teradata, Cloudera, DataStage and Informatica to modern data platforms like Databricks. Over the last few years, we have become a leading Databricks partner, building a strong and …
… for open-source contributors to Apache projects who have an in-depth understanding of the code behind the Apache ecosystem, have experience in Cloudera or a similar distribution, and possess in-depth knowledge of the big data tech stack. Requirements: experience of platform engineering along with application engineering (hands-on); experience …
… DV Clearance. WE NEED THE DATA ENGINEER TO HAVE: current DV clearance (MOD or Enhanced); experience with big data tools such as Hadoop, Cloudera or Elasticsearch; experience with Palantir Foundry; experience working in an Agile Scrum environment with tools such as Confluence/Jira; experience in design, development, test …
… best practices to streamline data workflows and reduce manual interventions. Must have: AWS, ETL, EMR, Glue, Spark/Scala, Java, Python. Good to have: Cloudera (Spark, Hive, Impala, HDFS), Informatica PowerCenter, Informatica DQ/DG, Snowflake, Erwin. Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, or … years of experience in data engineering, including working with AWS services. Proficiency in AWS services like S3, Glue, Redshift, Lambda, and EMR. Knowledge of Cloudera-based Hadoop is a plus. Strong ETL development skills and experience with data integration tools. Knowledge of data modeling, data warehousing, and data transformation techniques.
… expertise in Terraform, Kubernetes, Shell/PowerShell scripting, CI/CD pipelines (GitLab, Jenkins), Azure DevOps, IaC, and experience with big data platforms like Cloudera, Spark, and Azure Data Factory/Databricks. Key Responsibilities: implement and maintain Infrastructure as Code (IaC) using Terraform, Shell/PowerShell scripting, and CI/… the Technical and Solution Architect teams to design the overall solution architecture for end-to-end data flows; utilize big data technologies such as Cloudera, Hue, Hive, HDFS, and Spark for data processing and storage; ensure smooth data management for marketing consent and master data management (MDM) systems. Key Skills: … integration and delivery for streamlined development workflows. Azure Data Factory/Databricks: experience with these services is a plus for handling complex data processes. Cloudera (Hue, Hive, HDFS, Spark): experience with these big data tools is highly desirable for data processing. Azure DevOps, Vault: core skills for working in Azure …
… a DV Clearance. WE NEED THE DATA ENGINEER TO HAVE: current DV clearance (MOD or Enhanced); experience with big data tools such as Hadoop, Cloudera or Elasticsearch; experience with Palantir Foundry; experience working in an Agile Scrum environment with tools such as Confluence/Jira; experience in design, development, test …