Proficiency in writing and optimizing SQL. Knowledge of AWS services including S3, Redshift, EMR, Kinesis, and RDS. Experience with open-source data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.). Ability to write code in Python, Ruby, Scala, or another platform-related Big Data technology. Knowledge of professional software engineering…
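Several of these postings ask for SQL optimization skills. As a minimal sketch of what that means in practice, the snippet below uses SQLite's `EXPLAIN QUERY PLAN` to show a query switching from a full table scan to a covering index; the `orders` table and its columns are hypothetical, invented for illustration.

```python
import sqlite3

# Hypothetical schema for illustration only; not taken from any posting above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT SUM(total) FROM orders WHERE customer_id = 7"

# Without an index, the planner falls back to a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

# A covering index on (customer_id, total) lets SQLite answer from the index alone.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id, total)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

print(plan_before)  # a scan of the orders table
print(plan_after)   # a search using the covering index
```

The exact plan wording varies between SQLite versions, but the before/after contrast (scan vs. index search) is the point an interviewer would probe.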
… to streamline data workflows and reduce manual interventions. Must have: AWS, ETL, EMR, Glue, Spark/Scala, Java, Python. Good to have: Cloudera (Spark, Hive, Impala, HDFS), Informatica PowerCenter, Informatica DQ/DG, Snowflake, Erwin. Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, or a…
… Core Account team within the Services group (TTS) and is responsible for building a scalable, high-performance data platform on Big Data technologies (Spark, Scala, Hive, Hadoop) along with Kafka/Java and AI technologies to support core account data needs across multiple lines of business. As a tenant on…
… distributed systems as they pertain to data storage and computing. Experience with Redshift, Oracle, NoSQL, etc. Experience with big data technologies such as Hadoop, Hive, Spark, and EMR. Bachelor's degree. PREFERRED QUALIFICATIONS: Experience working on and delivering end-to-end projects independently. Experience providing technical leadership and mentoring other…
… computer science, mathematics, or a related quantitative field. Experience with scripting languages (e.g., Python, Java, R) and big data technologies/languages (e.g., Spark, Hive, Hadoop, PyTorch, PySpark). PREFERRED QUALIFICATIONS: Master's degree or advanced technical degree. Knowledge of data modeling and data pipeline design. Experience with statistical analysis…
Experience working with data visualization tools. Experience with GCP tools: Cloud Functions, Dataflow, Dataproc, and BigQuery. Experience with data processing frameworks: Beam, Spark, Hive, Flink. GCP data engineering certification is a merit. Hands-on experience with analytical tools such as Power BI or similar visualization tools. Exhibit understanding…
Experience as a Data Engineer on Cloud Data Lake activities, especially in high-volume data processing frameworks and ETL development using distributed computing frameworks such as Apache Spark, Hadoop, and Hive. Experience optimizing database performance, scalability, data security, and compliance. Experience with event-based, micro-batch, and batched high-volume, high-velocity…
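"Micro-batch" processing, mentioned in the posting above, means grouping an unbounded event stream into small fixed-size chunks that are processed together. A minimal pure-Python sketch of the idea (the event shape is hypothetical; engines like Spark Structured Streaming do this at scale):

```python
from itertools import islice
from typing import Iterable, Iterator, List

def micro_batches(stream: Iterable[dict], size: int) -> Iterator[List[dict]]:
    """Group an event stream into fixed-size micro-batches; the final batch may be short."""
    it = iter(stream)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Hypothetical events standing in for a high-velocity feed.
events = ({"id": i, "value": i * 2} for i in range(10))
batches = list(micro_batches(events, size=4))
print([len(b) for b in batches])  # [4, 4, 2]
```

The trade-off this models: larger batches amortize per-batch overhead (better throughput), smaller batches reduce end-to-end latency.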
… such as Python, Java, Scala, or NodeJS. Experience mentoring team members on best practices. PREFERRED QUALIFICATIONS: Experience with big data technologies such as Hadoop, Hive, Spark, and EMR. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace…
… and migration of these data warehouses to modern cloud data platforms. Deep understanding of and hands-on experience with big data technologies like Hadoop, HDFS, Hive, and Spark, and with cloud data platform services. Proven track record of designing and implementing large-scale data architectures in complex environments. CI/CD/DevOps experience…
… MDX, HiveQL, SparkSQL, Scala). Experience with one or more scripting languages (e.g., Python, KornShell). PREFERRED QUALIFICATIONS: Experience with big data technologies such as Hadoop, Hive, Spark, and EMR. Experience with any ETL tool, such as Informatica, ODI, SSIS, BODI, DataStage, etc. Our inclusive culture empowers Amazonians to deliver the best results…
… the big three cloud ML stacks (AWS, Azure, GCP). Hands-on experience with open-source ETL and data pipeline orchestration tools such as Apache Airflow and NiFi. Experience with large-scale/Big Data technologies such as Hadoop, Spark, Hive, Impala, PrestoDB, Kafka. Experience with workflow orchestration … tools like Apache Airflow. Experience with containerisation using Docker and deployment on Kubernetes. Experience with NoSQL and graph databases. Unix server administration and shell scripting experience. Experience building scalable data pipelines for highly unstructured data. Experience building DWH and data lake architectures. Experience working in cross…
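The core idea behind orchestration tools like Apache Airflow, named in the posting above, is a DAG of tasks executed in dependency order. A stdlib-only sketch using `graphlib` (Python 3.9+); the task names are hypothetical, and a real Airflow DAG adds scheduling, retries, and operators on top of this ordering:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline stages; each maps a task to the set of tasks it depends on.
dag = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load_warehouse": {"transform"},
    "refresh_dashboard": {"load_warehouse"},
}

# static_order() yields a valid execution order respecting every dependency edge.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'validate', 'transform', 'load_warehouse', 'refresh_dashboard']
```

Because this chain is linear the order is unique; with parallel branches, `TopologicalSorter` also exposes `get_ready()` so independent tasks can run concurrently.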
… like Python or KornShell. Knowledge of writing and optimizing SQL queries for large-scale, complex datasets. Experience with big data technologies such as Hadoop, Hive, Spark, and EMR. Experience with ETL tools like Informatica, ODI, SSIS, BODI, or DataStage. Our inclusive culture empowers Amazon employees to deliver the best results…
… SparkSQL, Scala). Experience with one or more scripting languages (e.g., Python, KornShell). PREFERRED QUALIFICATIONS: Experience with big data technologies such as Hadoop, Hive, Spark, and EMR. Experience with any ETL tool, such as Informatica, ODI, SSIS, BODI, DataStage, etc. Our inclusive culture empowers Amazonians to deliver the best results…
… e.g., Python, R, Scala, etc. (Python preferred). Proficiency in database technologies, e.g., SQL, ETL, NoSQL, DW, and Big Data technologies, e.g., PySpark, Hive, etc. Experience working with structured and unstructured data, e.g., text, PDFs, JPGs, call recordings, video, etc. Knowledge of machine learning modelling techniques and how…
… e.g., Python, R, Scala, etc. (Python preferred). Proficiency in database technologies, e.g., SQL, ETL, NoSQL, DW, and Big Data technologies, e.g., PySpark, Hive, etc. Experience working with both structured and unstructured data, e.g., text, PDFs, JPGs, call recordings, video, etc. Knowledge of machine learning modelling techniques and…
… data structures. Encouraging self-learning among the team. Essential Skills & Qualifications: A confident engineer with authoritative knowledge of Java and Hadoop, including HDFS, Hive, and Spark. Comfortable working with large data volumes and able to demonstrate a firm understanding of logical data structures and analysis techniques. Strong skills…
… in coding languages, e.g., Python, C++, etc. (Python preferred). Proficiency in database technologies, e.g., SQL, NoSQL, and Big Data technologies, e.g., PySpark, Hive, etc. Experience working with structured and unstructured data, e.g., text, PDFs, JPGs, call recordings, video, etc. Knowledge of machine learning modelling techniques and how…
… independently while also thriving in a collaborative team environment. Experience with GenAI/LLM projects. Familiarity with distributed data/computing tools (e.g., Hadoop, Hive, Spark, MySQL). Background in financial services, including banking or risk management. Knowledge of capital markets and financial instruments, along with modelling expertise. If…
… and building ETL pipelines. Experience with SQL. Experience mentoring team members on best practices. PREFERRED QUALIFICATIONS: Experience with big data technologies such as Hadoop, Hive, Spark, and EMR. Experience operating large data warehouses. Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to…
Adecco: Lincoln, Lincolnshire, United Kingdom (Hybrid/WFH options)
CRITERIA: Degree in Computer Science, Information Systems, or a related field, or a combination of education and relevant experience. Query languages, e.g., SQL, Java, Hive, R. Data management technologies, e.g., ETL tools, data integration platforms. Proven experience as a Data Architect or Data Engineer (or a related role, with a…
… experience working with relational and non-relational databases (e.g., Snowflake, BigQuery, PostgreSQL, MySQL, MongoDB). Hands-on experience with big data technologies such as Apache Spark, Kafka, Hive, or Hadoop. Proficiency in at least one programming language (e.g., Python, Scala, Java, R). Experience deploying and maintaining cloud…
McGregor Boyall Associates Limited: East London, London, United Kingdom (Hybrid/WFH options)
… and training techniques. Experience deploying models in production environments. Nice to have: Experience with GenAI/LLMs. Familiarity with distributed computing tools (Hadoop, Hive, Spark). Background in banking, risk management, or capital markets. Why join? This is a unique opportunity to work at the forefront of…
… and building ETL pipelines. Experience with SQL. Experience mentoring team members on best practices. PREFERRED QUALIFICATIONS: Experience with big data technologies such as Hadoop, Hive, Spark, and EMR. Experience operating large data warehouses. Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to…
… engineers on the team to elevate technology and consistently apply best practices. Qualifications for Software Engineer: Hands-on experience working with technologies like Hadoop, Hive, Pig, Oozie, MapReduce, Spark, Sqoop, Kafka, Flume, etc. Strong DevOps focus and experience building and deploying infrastructure with cloud deployment technologies like Ansible…