Informatica and Talend to ensure high-quality data integration from diverse sources including AWS, Azure Data Lake, Hadoop, and Oracle databases. * Build and optimize data warehouses utilizing SQL Server, Apache Hive, Spark, and other big data frameworks to support advanced analytics and reporting needs. * Develop interactive dashboards and reports using Looker and other visualization tools to present insights … patterns, and anomalies that inform strategic decisions. * Maintain comprehensive documentation of data models, workflows, and technical specifications for transparency and future reference. * Stay current with emerging technologies such as Apache Spark, Hadoop ecosystem components, Azure Data Lake services, and linked data concepts to continuously improve our BI infrastructure. *Skills* * Strong proficiency in SQL Server, Oracle, and Big Data technologies … including Hadoop ecosystem components like Hive and Spark. * Hands-on experience with cloud platforms such as AWS and Azure Data Lake for scalable data storage solutions. * Expertise in ETL tools like Informatica and Talend for efficient data processing workflows. * Advanced knowledge of programming languages including Python, Java, VBA, and shell scripting (Bash/Unix shell) for automation and analysis tasks. …
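The listing above centres on ETL workflows of the kind Informatica and Talend orchestrate. As a minimal sketch of the extract-transform-load pattern itself (not of either tool), here is a plain-Python example using the stdlib `sqlite3` module; all table and column names (`staging_sales`, `fact_sales`, `region`, `revenue`) are hypothetical.

```python
import sqlite3

def run_etl(conn: sqlite3.Connection) -> int:
    """Minimal extract-transform-load pass: copy cleaned rows
    from a staging table into a warehouse fact table."""
    cur = conn.cursor()
    # Extract: pull raw rows from the staging area.
    rows = cur.execute("SELECT id, region, revenue FROM staging_sales").fetchall()
    # Transform: normalise region names and drop rows with missing revenue.
    cleaned = [
        (rid, region.strip().upper(), float(revenue))
        for rid, region, revenue in rows
        if revenue is not None
    ]
    # Load: insert the cleaned rows into the warehouse table.
    cur.executemany(
        "INSERT INTO fact_sales (id, region, revenue) VALUES (?, ?, ?)", cleaned
    )
    conn.commit()
    return len(cleaned)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        """
        CREATE TABLE staging_sales (id INTEGER, region TEXT, revenue REAL);
        CREATE TABLE fact_sales (id INTEGER, region TEXT, revenue REAL);
        INSERT INTO staging_sales VALUES (1, ' emea ', 100.0),
                                         (2, 'apac', NULL),
                                         (3, 'AMER ', 250.5);
        """
    )
    print(run_etl(conn))  # 2 rows survive cleaning
```

Production ETL tools add scheduling, lineage, and connectors on top, but the extract/transform/load separation shown here is the core of what they automate.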
Conduct>It, Express>It, Metadata Hub, and PDL. Hands-on experience with SQL, Unix/Linux shell scripting, and data warehouse concepts. Familiarity with big data ecosystems (Hadoop, Hive, Spark) and cloud platforms (AWS, Azure, GCP) is a plus. Proven ability to troubleshoot complex ETL jobs and resolve performance issues. Experience working with large-scale datasets and enterprise …
City of London, London, United Kingdom Hybrid / WFH Options
Computappoint
Trino/Starburst Enterprise/Galaxy administration and CLI operations Container Orchestration : Proven track record with Kubernetes/OpenShift in production environments Big Data Ecosystem : Strong background in Hadoop, Hive, Spark, and cloud platforms (AWS/Azure/GCP) Systems Architecture : Understanding of distributed systems, high availability, and fault-tolerant design Security Protocols : Experience with LDAP, Active Directory, OAuth2 …
Job Description: Scala/Spark Good Big Data resource with the below skillset: Java Big Data technologies. Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.) Experience with real-time data processing platforms (Spark Streaming) would be an advantage. Consistently demonstrates clear and concise written and verbal communication. A history of delivering against agreed objectives …
City of London, London, United Kingdom Hybrid / WFH Options
E-Solutions
Hybrid) Type: TP (Inside IR35) Job Description: Good Big Data resource with the below skillset: Java and Big Data technologies, Scala/Spark Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.) Experience with real-time data processing platforms (Spark Streaming) would be an advantage. Consistently demonstrates clear and concise written and verbal communication. A …
London, South East, England, United Kingdom Hybrid / WFH Options
Lorien
capacity. Provide Environment Management representation in daily scrums, working groups, and ad-hoc meetings. Required Skillsets: Strong skills and experience with data technologies such as IBM DB2, Oracle, MongoDB, Hive, Hadoop, SQL, Informatica, and similar tech stacks. Attention to detail and strong ability to work independently and navigate complex target end state architecture (Tessa). Strong knowledge and experience …
Title: GCP Data Engineer Location: Philadelphia, PA (candidates willing to relocate may be submitted) GCP Data Engineer - GCP Dataflow and Apache Beam (key skills) Primary skills: PySpark, Spark, Python, Big Data, GCP, Apache Beam, Dataflow, Airflow, Kafka, and BigQuery GFO, Google Analytics; JavaScript is a must Strong experience with Dataflow and BigQuery The candidate should have experience leading a team … Platforms (preferably GCP) provided Big Data technologies Hands-on experience with real-time stream processing as well as high-volume batch processing, and skilled in advanced SQL, GCP BigQuery, Apache Kafka, data lakes, etc. Hands-on experience in Big Data technologies (Hadoop, Hive, and Spark) and an enterprise-scale Customer Data Platform (CDP) Experience in at least one …
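The Dataflow/Beam requirement above is, at its core, about windowed stream processing. As a minimal pure-Python sketch of the tumbling event-time window idea that Beam's `FixedWindows` transform implements (the event data and key names are hypothetical, and no Beam API is used):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Assign each (timestamp, key) event to a fixed-size event-time
    window and count events per (window_start, key) -- the same idea
    as a fixed/tumbling window plus a per-key count, in plain Python."""
    counts = defaultdict(int)
    for ts, key in events:
        # The window an event belongs to is determined by its own
        # timestamp, not by arrival time (event-time semantics).
        window_start = (ts // window_secs) * window_secs
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "click"), (3, "click"), (5, "view"), (11, "click")]
print(tumbling_window_counts(events, 5))
# 5-second windows: [0,5) has 2 clicks; [5,10) has 1 view; [10,15) has 1 click
```

A real Beam pipeline adds watermarks, triggers, and late-data handling on top of this grouping; batch processing is the special case where all events are available before any window is emitted.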
converting research studies into tangible real-world changes Knowledge of AWS platforms such as S3, Glue, Athena, Sagemaker Experience with big data technologies such as AWS, Hadoop, Spark, Pig, Hive etc. PhD in Industrial/Organizational Psychology or related field Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to our success. …
technical data & marketing solution experience 5+ years of in-depth understanding of database platforms including relational and non-relational data engines (MS SQL, MySQL, PostgreSQL, MongoDB, MariaDB, Hadoop, Snowflake, Hive, Spark, BigQuery, Redshift, Data Warehouse, and similar) 5+ years of experience with data processing (ETL) tools & methodologies (Informatica, Talend, Spark, Pentaho, SSIS, Unifi, SnapLogic, and similar) 3+ years … digital marketing analytics Experience and knowledge with marketing cloud solutions Experience in programming languages (Python, Java, or Bash scripting) Experience with Big Data technologies (e.g. Hadoop, Spark, Redshift, Snowflake, Hive, Pig, etc.) Experience with Martech/Adtech tools and how to integrate technologies into the data management solution (Adobe, Salesforce, Oracle, Google, and similar) Experience in advanced analytics/machine …
Oak Brook, Illinois, United States Hybrid / WFH Options
Ace Hardware Corporation
distribution), including performing backup and restore operations and supporting development, test, and production systems. Key Responsibilities Cloudera Hadoop Administration Manage and support Cloudera Hadoop clusters and services (HDFS, YARN, Hive, Impala, Spark, Oozie, etc.). Perform cluster upgrades, patching, performance tuning, capacity planning, and health monitoring. Secure the Hadoop platform using Kerberos, Ranger, or Sentry. Develop and maintain automation … of Spark and Delta Lake architecture. Experience with IAM, Active Directory, and SSO integration. Familiarity with DevOps and CI/CD for data platforms. Deep understanding of Hadoop ecosystem: Hive, Impala, Spark, HDFS, YARN. Experience integrating data from DB2 to Hadoop/Databricks using tools like Sqoop or custom connectors. Scripting skills in Shell and/or Python for …
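The DB2-to-Hadoop ingestion mentioned above is typically done with Sqoop's incremental-append mode (`--incremental append --check-column --last-value`). As an illustrative sketch of that watermark pattern only, here it is in plain Python with stdlib `sqlite3` standing in for the source database; the `orders` table, `id` check-column, and function name are all hypothetical, and no Sqoop API is involved.

```python
import sqlite3

def incremental_extract(conn, last_value):
    """Sqoop-style incremental append: fetch only rows whose
    check-column (id) exceeds the stored watermark, then return
    the batch together with the new watermark to persist."""
    rows = conn.execute(
        "SELECT id, payload FROM orders WHERE id > ? ORDER BY id",
        (last_value,),
    ).fetchall()
    # Advance the watermark only if new rows were seen.
    new_watermark = rows[-1][0] if rows else last_value
    return rows, new_watermark

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE orders (id INTEGER PRIMARY KEY, payload TEXT);
    INSERT INTO orders VALUES (1, 'a'), (2, 'b'), (3, 'c');
    """
)
batch, wm = incremental_extract(conn, last_value=1)   # rows 2 and 3
conn.execute("INSERT INTO orders VALUES (4, 'd')")
batch2, wm2 = incremental_extract(conn, last_value=wm)  # only the new row 4
```

Persisting the returned watermark between runs is what makes repeated imports pull only new rows instead of re-copying the whole table.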
also have Systems integration background or experience Experience of developing the Finance Data Strategy for large financial institutions, developing future state architecture Delivery experience in Big Data technologies and Apache ecosystem technologies such as Spark, Kafka, Hive, etc., and have experience building end-to-end data pipelines using on-premises or cloud-based data platforms. Hands-on experience …
working within a fast-paced financial services environment. Key Responsibilities: Design, develop, and maintain applications using Scala, Python, Hadoop, and Java. Work with Big Data technologies, including Spark, Hive (nice to have). Collaborate with cross-functional teams to deliver scalable, high-performance solutions. Participate in code reviews, testing, and performance optimization. Ensure best practices in coding, design … and architecture. Skills & Experience Required: 2-5 years of software development experience. Strong hands-on expertise in Scala (mandatory), plus Python and Java. Experience with Big Data frameworks; Apache Spark experience is an advantage. Solid understanding of software engineering principles, data structures, and algorithms. Strong problem-solving skills and ability to work in an Agile environment. Educational Criteria …