Job ID: Amazon Web Services Australia Pty Ltd. Are you a Senior Cloud Architect and GenAI consulting specialist? Do you have real-time Data Analytics, Data Warehousing, Big Data, Modern Data Strategy, Data Lake, Data Engineering and GenAI experience? Do you have senior stakeholder engagement experience to support pre-sales and deliver consulting engagements? Do you like to … learn and use services such as AWS Glue, Amazon S3, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Kinesis, Amazon Redshift, Amazon Athena, AWS Lake Formation, Amazon DataZone, Amazon SageMaker, Amazon QuickSight and Amazon … to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most …
Senior Delivery Consultant - Data Analytics & GenAI, AWS Professional Services Public Sector. Job ID: Amazon Web Services Australia Pty Ltd. Are you a Senior Data Analytics and GenAI consulting specialist? Do you have real-time Data Analytics, Data Warehousing, Big Data, Modern Data Strategy, Data Lake, Data Engineering and GenAI experience? Do you have senior stakeholder engagement experience to … learn and use services such as AWS Glue, Amazon S3, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Kinesis, Amazon Redshift, Amazon Athena, AWS Lake Formation, Amazon DataZone, Amazon SageMaker, Amazon QuickSight and Amazon … clearance. PREFERRED QUALIFICATIONS - AWS Professional-level certification - 10+ years of experience in IT platform implementation in a technical and analytical role. Acknowledgement of country: in the spirit of reconciliation, Amazon acknowledges the Traditional Custodians of country throughout Australia and their connections to land, sea and community. We pay our respect to their Elders past and present and extend that …
Delivery Consultant - Machine Learning (GenAI), ProServe SDT North. Job ID: Amazon Web Services EMEA SARL, Dutch Branch. AWS Professional Services is a unique organization. Our customers are among the most advanced companies in the world. We build world-class, cloud-native IT solutions that solve their real business problems, and we help them achieve business outcomes with AWS. Our … projects are often unique, one-of-a-kind endeavors that no one has ever done before. At Amazon Web Services (AWS), we are helping large enterprises build AI solutions on the AWS Cloud. We are applying predictive technology to large volumes of data and against a wide spectrum of problems. AWS Professional Services works together with AWS customers … Compute (EC2), Amazon Data Pipeline, Amazon S3, Glue, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Kinesis, AWS Lake Formation and other AWS services. You will collaborate across the whole AWS organization, with other consultants, customer teams and partners on proof-of-concept …
AWS Data Engineer, London, UK (Permanent). Strong experience in Python, PySpark, AWS S3, AWS Glue, Databricks, Amazon Redshift … DynamoDB, CI/CD and Terraform. A total of 7+ years of experience in data engineering is required. Design, develop, and optimize ETL pipelines using AWS Glue, Amazon EMR and Kinesis for real-time and batch data processing. Implement data transformation, streaming, and storage solutions on AWS, ensuring scalability and performance. Collaborate with cross-functional teams to integrate … and manage data workflows. Skill set: Amazon Redshift, S3, AWS Glue, Amazon EMR, Kinesis Analytics. Ensure data security, compliance, and best practices in cloud data engineering. Experience with programming languages such as Python, Java, or Scala.
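To make the kind of batch ETL work this listing describes more concrete, here is a minimal PySpark sketch that reads raw JSON from S3, applies a simple transformation, and writes partitioned Parquet back to S3. The bucket names and column names are hypothetical, and on AWS such a script would typically be submitted as a Glue job or an EMR step rather than run standalone.

```python
# Minimal PySpark batch ETL sketch (hypothetical bucket and column names).
# On AWS this would typically run as a Glue job or an EMR step.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Read raw JSON events from an input prefix in S3.
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Basic cleansing and derivation: drop malformed rows, add a date partition column.
cleaned = (
    raw.dropna(subset=["order_id", "order_ts"])
       .withColumn("order_date", F.to_date(F.col("order_ts")))
       .withColumn("total_price", F.col("quantity") * F.col("unit_price"))
)

# Write the curated dataset as partitioned Parquet for Redshift Spectrum / Athena to query.
(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))
```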
data processing); Apache Spark Streaming, Kafka or similar (for real-time data streaming); experience using data tools in at least one cloud service - AWS, Azure or GCP (e.g. S3, EMR, Redshift, Glue, Azure Data Factory, Databricks, BigQuery, Dataflow, Dataproc). Would you like to join us as we work hard, have fun and make history?
the following: Python, SQL, Java. Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams. Deep knowledge of database technologies: distributed systems (e.g., Spark, Hadoop, EMR); RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL); NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j). Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD, and code …
be a fit if you have: Expertise in Cloud-Native Data Engineering: 3+ years building and running data pipelines in AWS or Azure, including managed data services (e.g., Kinesis, EMR/Databricks, Redshift, Glue, Azure Data Lake). Programming Mastery: Advanced skills in Python or another major language; writing clean, testable, production-grade ETL code at scale. Modern Data …
optimised and efficient data marts and warehouses in the cloud). Work with Infrastructure as Code (Terraform) and containerising applications (Docker). Work with AWS (S3, SQS, Iceberg, Parquet, Glue and EMR) for our Data Lake. Experience developing CI/CD pipelines. More information: enjoy fantastic perks like private healthcare & dental insurance, a generous work from abroad policy, 2-for …
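As an illustration of the S3/SQS landing step this listing touches on, below is a minimal boto3 sketch that drains a batch of messages from SQS and lands each one as a raw object in S3. The queue URL and bucket name are hypothetical; in a data lake like the one described, downstream Glue or EMR jobs would later compact these raw objects into Parquet or Iceberg tables.

```python
# Minimal boto3 sketch of an SQS -> S3 landing step for a data lake
# (queue URL and bucket name are hypothetical).
import uuid

import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/example-events"  # hypothetical
BUCKET = "example-datalake-raw"  # hypothetical


def drain_once(max_messages: int = 10) -> int:
    """Pull one batch of messages and land each as a raw JSON object in S3."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=max_messages,
        WaitTimeSeconds=10,
    )
    messages = resp.get("Messages", [])
    for msg in messages:
        key = f"events/raw/{uuid.uuid4()}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=msg["Body"].encode("utf-8"))
        # Delete only after the object has been written successfully.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
    return len(messages)


if __name__ == "__main__":
    print(f"landed {drain_once()} messages")
```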
disaster-recovery drills for stream and batch environments. Architecture & Automation: collaborate with data engineering and product teams to architect scalable, fault-tolerant pipelines using AWS services (e.g., Step Functions, EMR, Lambda, Redshift) integrated with Apache Flink and Kafka. Troubleshoot & Maintain Python-based applications. Harden CI/CD for data jobs: implement automated testing of data schemas, versioned Flink …
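To make the "automated testing of data schemas" point concrete, here is a minimal, library-free sketch of the kind of schema check a CI pipeline might run against sample records before a Flink/Kafka job is deployed. The event fields and types are hypothetical, not taken from the listing.

```python
# Minimal sketch of a CI-time schema check for streaming events
# (field names and types are hypothetical).

EXPECTED_SCHEMA = {
    "event_id": str,
    "occurred_at": str,   # ISO-8601 timestamp kept as a string
    "amount": float,
    "currency": str,
}


def validate_record(record: dict) -> list[str]:
    """Return a list of schema violations for a single record."""
    errors = []
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}, "
                          f"got {type(record[field]).__name__}")
    return errors


def test_sample_records_match_schema():
    # In CI this would load versioned sample payloads checked into the repo.
    samples = [
        {"event_id": "e-1", "occurred_at": "2024-01-01T00:00:00Z",
         "amount": 12.5, "currency": "GBP"},
    ]
    for record in samples:
        assert validate_record(record) == []
```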
buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon continues to grow and evolve as a world-class e-commerce platform. Amazon's evolution from website to e-commerce partner to development platform is driven by the spirit of innovation that is part of the company's DNA. The world … come to … to research and develop technology that improves the lives of shoppers and sellers around the world. About the team: the RBS team is an integral part of Amazon's online product lifecycle and buying operations. The team is designed to ensure Amazon remains competitive in the online retail space with the best price, wide selection and … good product information. The team's primary role is to create and enhance retail selection on the worldwide Amazon online catalog. The tasks handled by this group have a direct impact on customer buying decisions and online user experience. Overview of the role: an ideal candidate will be a self-starter who is passionate about discovering and solving complicated …
with strong potential to extend. Working Model: Hybrid. Key Skills (in order of priority): SQL (advanced); Python; SAS; building and maintaining data pipelines (ideally using AWS: S3, Glue, Athena, EMR); Spark, Hadoop, or other big data frameworks; data modelling (star/snowflake schemas), governance, and compliance awareness. Join a collaborative, agile team delivering critical data infrastructure for a major …
Experience in a data-focused SRE, Data Platform, or DevOps role. Strong knowledge of Apache Flink, Kafka, and Python in production environments. Hands-on experience with AWS (Lambda, EMR, Step Functions, Redshift, etc.). Comfortable with monitoring tools, distributed systems debugging, and incident response. Reference Number: BBBH259303. To apply for this role or to be considered for further …
Software engineering using a high-level language like Go, Java, JavaScript, Python. Distributed software architecture exposure in high-volume production scenarios. Working with Data Mesh and big data technologies such as EMR, Spark, Databricks. Designing, tracking and testing to SLOs, and chaos engineering to error budgets. Implementing Business Continuity (BCP) and Disaster Recovery (DRP) plans, including tracking RTO and RPO. Cryptocurrency …
pragmatic and open-minded approach to achieving outcomes in the simplest way possible. Have worked with stream processing technologies (e.g. Apache Kafka). Have experience with AWS services, especially EMR & ECS. Are passionate about software quality, DevOps (e.g. Terraform) and automation. Work well in lean, agile, cross-functional product teams using Scrum and Kanban practices. Are a good communicator …
Amazon Retail Financial Intelligence Systems is seeking a seasoned and talented Senior Data Engineer to join the Fortune Platform team. Fortune is a fast-growing team with a mandate to build tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of Business Intelligence solutions using big data technologies … such as Apache Spark, Hive/Hadoop, and distributed query engines. As a Data Engineer at Amazon, you will be working in a large, extremely complex and dynamic data environment. You should be passionate about working with big data and be able to learn new technologies rapidly and evaluate them critically. You should have excellent communication skills and … 3+ years of data engineering experience - Experience with data modeling, warehousing and building ETL pipelines - Experience with SQL PREFERRED QUALIFICATIONS - Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions - Experience with non-relational databases/data stores (object storage, document or key-value stores, graph databases, column-family databases). Our …
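As a sketch of the Spark/Hive-style warehousing work referenced here, the snippet below registers a curated transactions dataset as a temporary view and builds a daily profit aggregate with Spark SQL. The table, column, and path names are hypothetical; in practice the output would land in S3 or Redshift for BI consumption.

```python
# Spark SQL sketch of a daily aggregate for a BI/warehousing layer
# (table, column, and path names are hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily-aggregate-sketch").getOrCreate()

# Curated fact data, e.g. previously landed as Parquet by an upstream ETL job.
transactions = spark.read.parquet("s3://example-curated-bucket/transactions/")
transactions.createOrReplaceTempView("transactions")

daily = spark.sql("""
    SELECT
        transaction_date,
        marketplace_id,
        SUM(revenue)        AS total_revenue,
        SUM(cost)           AS total_cost,
        SUM(revenue - cost) AS gross_profit
    FROM transactions
    GROUP BY transaction_date, marketplace_id
""")

# Persist the aggregate where Redshift Spectrum or a BI tool can read it.
daily.write.mode("overwrite").parquet("s3://example-warehouse-bucket/daily_profit/")
```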
in building data and science solutions to drive strategic direction? Based in Tokyo, the Science and Data Technologies team designs, builds, operates, and scales the data infrastructure powering Amazon's retail business in Japan. Working with a diverse, global team serving customers and partners worldwide, you can make a significant impact while continuously learning and experimenting with cutting … software engineers and business teams to identify and implement strategic data opportunities. Key job responsibilities include: - Create data solutions with AWS services such as Redshift, S3, EMR, Lambda, SageMaker, CloudWatch, etc. - Implement robust data solutions and scalable data architectures. - Develop and improve operational excellence, data quality, monitoring and data governance. BASIC QUALIFICATIONS - Bachelor's degree … 3+ years of experience with data modeling, data warehousing, ETL/ELT pipelines and BI tools. - Experience with cloud-based big data technology stacks (e.g., Hadoop, Spark, Redshift, S3, EMR, SageMaker, DynamoDB, etc.) - Knowledge of data management and data storage principles. - Expert-level proficiency in writing and optimizing SQL. - Ability to write code in Python for data processing. - Business …
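As a small illustration of the monitoring and data-quality responsibility described in this listing, here is a sketch of an AWS Lambda handler that publishes a custom data-quality metric to CloudWatch. The namespace, metric name, and event payload shape are hypothetical assumptions, not taken from the posting.

```python
# Sketch of a Lambda handler that emits a custom data-quality metric to CloudWatch
# (namespace, metric name, and event shape are hypothetical).
import boto3

cloudwatch = boto3.client("cloudwatch")


def handler(event, context):
    # Assume an upstream load job passes row counts in the invocation payload.
    expected = event.get("expected_rows", 0)
    loaded = event.get("loaded_rows", 0)
    missing = max(expected - loaded, 0)

    cloudwatch.put_metric_data(
        Namespace="ExampleDataPlatform",
        MetricData=[
            {
                "MetricName": "MissingRows",
                "Dimensions": [{"Name": "Dataset", "Value": event.get("dataset", "unknown")}],
                "Value": float(missing),
                "Unit": "Count",
            }
        ],
    )
    return {"missing_rows": missing}
```

An alarm on this metric would then page the on-call engineer when a load falls short, which is one common way the "monitoring" duty in such roles is implemented.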
urgency and importance of what we're doing for society. First month - some examples of what to expect: Help add further key third-party API integrations, including with legacy EMR systems and national APIs such as the electronic prescribing service, allowing Anima to directly issue prescriptions. Iterate on a proprietary graph traversal algorithm to improve patient care and clinical …
The Amazon IN Platform Development team is looking to hire a rock-star Data/BI Engineer to build for pan-Amazon India businesses. Amazon India is at the core of WW today, and the team is chartered with democratizing data access for the entire marketplace and adding productivity. That translates to owning the processing of … every Amazon India transaction, for which the team is organized to have dedicated business owners and processes for each focus area. The BI Engineer will play a key role in contributing to the success of each focus area by partnering with respective business owners and leveraging data to identify areas of improvement and optimization. He/she will build deliverables … 3+ years of data engineering experience - Experience with data modeling, warehousing and building ETL pipelines - Experience with SQL PREFERRED QUALIFICATIONS - Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions - Experience with non-relational databases/data stores (object storage, document or key-value stores, graph databases, column-family databases). Our …