latest data analytics technologies? Would you like a career path that enables you to progress with the rapid adoption of cloud computing? At Amazon Web Services, we're hiring highly technical cloud architects specialised in data analytics to collaborate with our customers and partners to derive business value … as AWS Glue, Amazon S3, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Kinesis, Amazon Redshift, Amazon Athena, AWS Lake Formation, Amazon DataZone, Amazon SageMaker and … Amazon QuickSight. Solutions - Deliver technical engagements with partners and customers. This includes participating in pre-sales visits, understanding customer requirements, creating consulting proposals and creating packaged data analytics service offerings. Delivery - Engagements include projects proving the use of AWS services to support new distributed computing solutions that often More ❯
them achieve business outcomes with AWS. Our projects are often unique, one-of-a-kind endeavors that no one has done before. At Amazon Web Services (AWS), we are helping large enterprises build AI solutions on the AWS Cloud. We apply predictive technology to large volumes of data … customers. You will leverage the global scale, elasticity, automation, and high-availability features of the AWS platform. You will build customer solutions with Amazon SageMaker, Amazon Bedrock, Amazon Elastic Compute Cloud (EC2), Amazon … Data Pipeline, Amazon S3, Glue, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Kinesis, AWS Lake Formation, and other AWS services. You will collaborate across the whole AWS organization, with other consultants, customer teams More ❯
various data sources to target destinations using AWS services. Utilize AWS services such as AWS Glue, Amazon Kinesis, and Amazon EMR to build and manage efficient data ingestion and processing workflows. Transform raw data into usable formats, applying programming concepts and best practices to ensure … data quality and consistency. Design and implement effective data models and select appropriate data storage solutions based on business requirements, including Amazon S3, Amazon Redshift, and/or Amazon DynamoDB. Establish and maintain a comprehensive data catalog, documenting data schemas and managing data lifecycles. … minimum of 4 years of professional experience in data engineering or data architecture. At least 2 years of hands-on experience working with Amazon Web Services (AWS). Proven experience in setting up and maintaining ETL pipelines using AWS services like AWS Glue, Amazon Kinesis, and More ❯
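The "transform raw data into usable formats" step described above can be sketched as a small record-normalisation function of the kind a Glue or EMR job would apply per partition at scale. The field names (`order_id`, `amount`, `ts`) are hypothetical, chosen only for illustration:

```python
from datetime import datetime, timezone

def normalize_record(raw: dict) -> dict:
    """Coerce one raw ingested record into a consistent schema.

    Field names are hypothetical; a real Glue/EMR job would apply
    the same cleaning logic across partitions before loading to
    S3, Redshift, or DynamoDB.
    """
    return {
        # Strip stray whitespace from identifiers.
        "order_id": str(raw["order_id"]).strip(),
        # Treat empty strings and None as missing amounts, defaulting to 0.0.
        "amount": float(raw.get("amount") or 0.0),
        # Normalise epoch timestamps to UTC ISO-8601 for downstream queries.
        "ts": datetime.fromtimestamp(int(raw["ts"]), tz=timezone.utc).isoformat(),
    }

rows = [{"order_id": " 42 ", "amount": "19.99", "ts": "1700000000"}]
clean = [normalize_record(r) for r in rows]
```

Keeping the cleaning logic in a pure function like this makes it easy to unit-test before wiring it into a distributed job.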
Associate Data Analytics Consultant, A2C Job ID: Amazon Web Services Korea LLC Are you a Data Analytics specialist? Do you have Data Warehousing and/or Hadoop experience? Do you like to solve the most complex and high scale data challenges in the world today? Would you like … such as Amazon Elastic Compute Cloud (EC2), Amazon Data Pipeline, S3, DynamoDB NoSQL, Relational Database Service (RDS), Elastic MapReduce (EMR) and Amazon Redshift. - Deliver on-site technical engagements with partners and customers. This includes participating in pre-sales on-site visits, understanding … following: Apache Spark/Hadoop, Flume, Kinesis, Kafka, Oozie, Hue, Zookeeper, Ranger, Elasticsearch, Avro, Hive, Pig, Impala, Spark SQL, Presto, PostgreSQL, Amazon EMR, Amazon Redshift. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and More ❯
into AWS data stores for both batch and streaming data ingestion. AWS Expertise: Utilize your expertise in AWS services such as Amazon EMR, S3, AWS Glue, Amazon Redshift, AWS Lambda, and more to build and optimize data solutions. Data Modeling: Design and implement data models … deliver high-quality data solutions. Automation: Implement automation processes and best practices to streamline data workflows and reduce manual interventions. Must have: AWS, ETL, EMR, Glue, Spark/Scala, Java, Python. Good to have: Cloudera - Spark, Hive, Impala, HDFS, Informatica PowerCenter, Informatica DQ/DG, Snowflake, Erwin. Qualifications: Bachelor More ❯
implementation of complex data pipelines and ETL/ELT processes using cloud-native technologies (e.g. AWS Glue, AWS Lambda, AWS S3, AWS Redshift, AWS EMR). Develop and maintain data quality checks, data validation rules and data lineage documentation. Collaborate with data analysts, data scientists, business stakeholders and product More ❯
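The data-quality checks and validation rules mentioned above can be sketched as a small rule engine; in an AWS pipeline the same checks might run inside a Glue job or a Lambda before loading. The rule names and fields here are illustrative, not from any listing:

```python
def run_quality_checks(rows, rules):
    """Apply named validation rules to each row; return (row_index, rule_name)
    for every failure so bad records can be quarantined or reported."""
    failures = []
    for i, row in enumerate(rows):
        for name, predicate in rules.items():
            if not predicate(row):
                failures.append((i, name))
    return failures

# Hypothetical rules for an orders feed.
rules = {
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
    "id_present": lambda r: bool(r.get("id")),
}
rows = [{"id": "a1", "amount": 5.0}, {"id": "", "amount": -2.0}]
bad = run_quality_checks(rows, rules)
```

Returning structured failures rather than raising on the first bad row lets a pipeline load the clean subset while routing rejects to a dead-letter location.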
in implementing cloud based data solutions using AWS services such as EC2, S3, EKS, Lambda, API Gateway, Glue and big data tools like Spark, EMR, Hadoop etc. Hands-on experience in data profiling, data modeling and data engineering using relational databases like Snowflake, Oracle, SQL Server; ETL tools like More ❯
Nice If You Have: Experience with deploying analytics workloads on platform as a service (PaaS) and software as a service (SaaS), including AWS EMR, Redshift, or SageMaker or Azure Databricks, SQL Data Warehouse, or Machine Learning service Experience with distributed or parallel programming frameworks, including Apache Spark or More ❯
Milton Keynes, Buckinghamshire, United Kingdom Hybrid / WFH Options
Banco Santander SA
Previous experience developing, testing, and deploying data pipelines, data lakes, data warehouses, and data marts using ideally AWS services such as S3, Glue, Athena, EMR, Kinesis, and Lambda. Experience with major operating platforms and their linkage, connectivity functions and issues. Cloud Security - Experience/skills in addressing concept issues More ❯
East London, London, United Kingdom Hybrid / WFH Options
Asset Resourcing
and database management systems (e.g., MySQL, PostgreSQL, MongoDB). Hands-on experience with AWS services for data processing, storage and analysis (e.g., S3, Redshift, EMR, Glue). Familiarity with programming languages such as Python or Java. Understanding of data warehousing concepts and data modeling techniques. Experience working with big More ❯
Amazon Marketing team is working on a platform that will provide real-time insights of traffic, sales, deals, and engagement and is looking for a rock star data engineer to build this. At Amazon, understanding customer data is paramount to our success in making it Earth … effective marketing organization. This cuts across all of the investments we do across various verticals of A.in, Prime, Fashion, Fresh, Pay, etc. At Amazon, you will be working in one of the world's largest and most complex data environments. You will be part of a team that … DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) - Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions - Experience with non-relational databases/data stores (object storage, document or key-value stores, graph More ❯
problems in a challenging and fun environment using some of the latest Big Data open-source technologies like Apache Spark as well as Amazon Web Services technologies including Elastic MapReduce (EMR), Athena and Lambda to develop scalable data solutions. • Adhering to Company Policies and Procedures with respect to Security More ❯
with big data tools: Hadoop, Spark, etc. Experience with relational SQL and NoSQL databases, including Postgres Experience with AWS cloud or remote services: EC2, EMR, RDS, Redshift Experience with stream-processing systems: Kafka, Storm, Spark-Streaming, etc. Experience with object-oriented/object function scripting languages: Python, Julia, Java More ❯
Pandas, SQL, and AWS analytics services (Glue, Athena, Redshift) for data profiling, transformation, and validation within data lakes. Solid experience with AWS (S3, Lambda, EMR, ECS/EKS, CloudFormation/Terraform) and understanding of cloud-native architectures and best practices. Advanced skills in automating API and backend testing workflows More ❯
Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams. Deep knowledge of database technologies: distributed systems (e.g., Spark, Hadoop, EMR); RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL); NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j). Solid understanding of software engineering best practices - code reviews, testing frameworks More ❯
and unstructured data from multiple sources, ensuring efficient data ingestion, transformation, and storage. Develop and optimize data lake and data warehouse solutions using Amazon S3, Redshift, Athena, and Lake Formation. Implement data governance, security, and compliance best practices, including IAM roles, encryption, and access controls. Monitor and optimize … cloud technologies. Proficiency in Python, PySpark, SQL, and AWS Glue for ETL development. Hands-on experience with AWS data services, including Redshift, Athena, Glue, EMR, and Kinesis. Strong knowledge of data modeling, warehousing, and schema design. Experience with event-driven architectures, streaming data, and real-time processing using Kafka More ❯
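The data-lake optimisation described above typically relies on Hive-style partitioned S3 key layouts, which let Athena and Glue prune partitions at query time instead of scanning the whole table. A minimal sketch, assuming a hypothetical bucket prefix and table name:

```python
from datetime import date

def partition_key(table: str, event_date: date, prefix: str = "datalake") -> str:
    """Build a Hive-style S3 key prefix (year=/month=/day=) for one
    day's data. Bucket layout and table name are assumptions made
    for illustration only."""
    return (
        f"{prefix}/{table}/"
        f"year={event_date.year}/month={event_date.month:02d}/day={event_date.day:02d}/"
    )

key = partition_key("orders", date(2024, 3, 7))
```

Queries that filter on `year`/`month`/`day` columns then read only the matching prefixes, which directly reduces Athena scan costs.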
Central London, London, United Kingdom Hybrid / WFH Options
167 Solutions Ltd
and lakehouse solutions for analytics, reporting, and machine learning. Implement ETL/ELT processes using tools such as Apache Airflow, AWS Glue, and Amazon Athena. Work with cloud-native technologies to support scalable, serverless architectures. Collaborate with data science teams to streamline feature engineering and model deployment. More ❯
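The ETL/ELT processes mentioned above boil down to composing extract, transform, and load steps; in Airflow each would be a task wired `extract >> transform >> load`, but the pattern can be sketched as plain functions. The data and sink here are hypothetical stand-ins:

```python
def extract():
    # Stand-in for reading source data (e.g. an S3 object or Athena result).
    return [{"user": "a", "clicks": 3}, {"user": "b", "clicks": 5}]

def transform(rows):
    # Aggregate into the shape a reporting or feature table would consume.
    return {"total_clicks": sum(r["clicks"] for r in rows), "users": len(rows)}

def load(summary, sink):
    # Stand-in for writing to a warehouse table or feature store.
    sink.append(summary)
    return summary

sink = []
result = load(transform(extract()), sink)
```

Keeping each stage a separate, side-effect-light callable is what makes the pipeline easy to test locally and then hand to an orchestrator like Airflow unchanged.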
DESIRED QUALIFICATIONS AWS Certification: Solutions Architect Professional or equivalent Demonstrated experience with cloud technologies used with ETL and big data pipelines such as Glue, EMR, Hive, Spark, S3, SQS, and SNS Demonstrated experience with Kubernetes or EKS Demonstrated experience using the AWS CDK More ❯
Proficiency in coding one or more languages such as Java, Python or PySpark Experience in Cloud implementation with AWS Data Services, Glue ETL (or) EMR, S3, Glue Catalog, Athena, Lambda, Step Functions, Event Bridge, ECS, Data De/Serialization, Parquet, JSON format, IAM Services, Encryption, KMS, Secrets Manager. Practical More ❯
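The "Data De/Serialization, Parquet, JSON format" requirement above concerns converting row-oriented JSON into the column-oriented layout that Parquet stores. A minimal sketch of that pivot, with illustrative field names; a real job would hand the resulting columns to a Parquet writer such as pyarrow:

```python
import json

def rows_to_columns(json_lines: str) -> dict:
    """Pivot newline-delimited JSON records into a column-oriented dict,
    the in-memory shape a Parquet writer consumes. Assumes every record
    shares the keys of the first one."""
    rows = [json.loads(line) for line in json_lines.strip().splitlines()]
    columns = {key: [] for key in rows[0]}
    for row in rows:
        for key, values in columns.items():
            values.append(row.get(key))  # None for any missing field
    return columns

ndjson = '{"id": 1, "country": "GB"}\n{"id": 2, "country": "DE"}'
cols = rows_to_columns(ndjson)
```

Columnar layout is what lets engines like Athena read only the columns a query touches, which is why Parquet is preferred over raw JSON for analytics.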
and Airflow. Experience with CI/CD and infrastructure-as-code, ideally within AWS cloud. Also desirable - familiarity with AWS' data tools such as EMR, MWAA, MSK. You are Someone who loves collaboration - our teams are cross functional and you'll be working with other engineers, team leads and More ❯
SQL queries and a good understanding of data models; Python knowledge is a plus. Working knowledge of AWS cloud (EC2, ECS, Load Balancer, Security Group, EMR, Lambda, S3, Glue, etc.) Experience in DevOps development and deployment using container platforms About S&P Global Ratings At S&P Global Ratings, our More ❯