Job ID: Amazon Web Services Australia Pty Ltd Are you a Senior Cloud Architect and GenAI consulting specialist? Do you have real-time Data Analytics, Data Warehousing, Big Data, Modern Data Strategy, Data Lake, Data Engineering and GenAI experience? Do you have senior stakeholder engagement experience to support pre-sales and deliver consulting engagements? Do you like to … learn and use services such as AWS Glue, Amazon S3, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Kinesis, Amazon Redshift, Amazon Athena, AWS Lake Formation, Amazon DataZone, Amazon SageMaker, Amazon QuickSight and Amazon … to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most …
Senior Delivery Consultant - Data Analytics & GenAI, AWS Professional Services Public Sector Job ID: Amazon Web Services Australia Pty Ltd Are you a Senior Data Analytics and GenAI consulting specialist? Do you have real-time Data Analytics, Data Warehousing, Big Data, Modern Data Strategy, Data Lake, Data Engineering and GenAI experience? Do you have senior stakeholder engagement experience to … learn and use services such as AWS Glue, Amazon S3, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Kinesis, Amazon Redshift, Amazon Athena, AWS Lake Formation, Amazon DataZone, Amazon SageMaker, Amazon QuickSight and Amazon … clearance. PREFERRED QUALIFICATIONS - AWS Professional level certification - 10+ years of experience in IT platform implementation in a technical and analytical role Acknowledgement of country: In the spirit of reconciliation Amazon acknowledges the Traditional Custodians of country throughout Australia and their connections to land, sea and community. We pay our respect to their elders past and present and extend that …
AWS Data Engineer London, UK Permanent Strong experience in Python, PySpark, AWS S3, AWS Glue, Databricks, Amazon Redshift … DynamoDB, CI/CD and Terraform. A total of 7+ years of experience in data engineering is required. Design, develop, and optimize ETL pipelines using AWS Glue, Amazon EMR and Kinesis for real-time and batch data processing. Implement data transformation, streaming, and storage solutions on AWS, ensuring scalability and performance. Collaborate with cross-functional teams to integrate … and manage data workflows. Skill set: Amazon Redshift, S3, AWS Glue, Amazon EMR, Kinesis Analytics. Ensure data security, compliance, and best practices in cloud data engineering. Experience with programming languages such as Python, Java, or Scala.
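For illustration only, a minimal PySpark sketch of the kind of batch ETL step this listing describes (raw events on S3 cleaned and written back as partitioned Parquet). The bucket paths and column names are assumptions, not the employer's actual pipeline; on EMR or Glue the same logic would run against the cluster.

```python
# Sketch of a batch ETL job: raw JSON in S3 -> de-duplicated, partitioned Parquet.
# Paths and columns are placeholders for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clickstream-batch-etl").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/clickstream/2024/")  # placeholder path

cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("user_id").isNotNull())
)

# Partitioning by date keeps downstream Athena / Redshift Spectrum scans cheap.
(cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-curated-bucket/clickstream/"))
```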
AWS Data Engineer with EMR clusters Piscataway, NJ 12 months Visa: Any visa independent. Data Pipeline Development: Design and implement robust ETL processes to extract, transform, and load data from various sources into data lakes and warehouses. AWS EMR Clusters: Configure, manage, and optimize Amazon EMR clusters for big data processing using Apache Spark, Hive … a related field. Experience: 3+ years of experience in data engineering or a related role, with a focus on AWS technologies. AWS Expertise: Proficient in AWS services such as EMR, S3, RDS, Redshift, Lambda, and CloudFormation. Kubernetes Knowledge: Experience with Kubernetes for container orchestration and microservices architecture. CI/CD Tools: Familiarity with CI/CD tools and practices …
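As a hedged sketch of what "configure and manage EMR clusters" can look like in practice, the snippet below launches a transient EMR cluster with a single Spark step via boto3. The cluster name, instance types, IAM roles and S3 script path are illustrative assumptions, not values from this role.

```python
# Launch a transient EMR cluster that runs one spark-submit step and then terminates.
# All names, paths and instance types are placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="nightly-etl-cluster",                      # hypothetical cluster name
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
    Instances={
        "InstanceGroups": [
            {"Name": "Primary", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,        # terminate when the step finishes
    },
    Steps=[{
        "Name": "spark-etl",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "--deploy-mode", "cluster",
                     "s3://example-bucket/jobs/etl_job.py"],  # hypothetical job script
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Started cluster:", response["JobFlowId"])
```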
data processing) Apache Spark Streaming, Kafka or similar (for real-time data streaming) Experience using data tools in at least one cloud service - AWS, Azure or GCP (e.g. S3, EMR, Redshift, Glue, Azure Data Factory, Databricks, BigQuery, Dataflow, Dataproc) Would you like to join us as we work hard, have fun and make history? Apply for this job
the following: Python, SQL, Java Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams Deep knowledge of database technologies: Distributed systems (e.g., Spark, Hadoop, EMR) RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL) NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j) Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD, and code …
be a fit if you have: Expertise in Cloud-Native Data Engineering: 3+ years building and running data pipelines in AWS or Azure, including managed data services (e.g., Kinesis, EMR/Databricks, Redshift, Glue, Azure Data Lake). Programming Mastery: Advanced skills in Python or another major language; writing clean, testable, production-grade ETL code at scale. Modern Data …
strong proficiency in SQL and PL/SQL, Oracle access control (roles, privileges, user management, and data security). Working knowledge of AWS core services, including S3, EC2/EMR, IAM, Athena, Glue or Redshift. Hands-on experience with Databricks Spark on large datasets, using PySpark, Scala, or SQL. Familiarity with Delta Lake, Unity Catalog or similar data lakehouse …
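To make the Databricks/Delta Lake requirement above concrete, here is a small, hedged PySpark sketch of writing and reading a Delta table, including version time travel. It assumes the delta-spark package and its jars are available; the path and data are invented for illustration.

```python
# Write a Delta table, then read it back and time-travel to version 0.
# Requires delta-spark on the Spark classpath; path and data are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])
df.write.format("delta").mode("overwrite").save("/tmp/demo_delta_table")

latest = spark.read.format("delta").load("/tmp/demo_delta_table")
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/tmp/demo_delta_table")
latest.show()
```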
Pytest and Postman. Expertise in Pandas, SQL, and AWS analytics services (Glue, Athena, Redshift) for data profiling, transformation, and validation within data lakes. Solid experience with AWS (S3, Lambda, EMR, ECS/EKS, CloudFormation/Terraform) and understanding of cloud-native architectures and best practices. Advanced skills in automating API and backend testing workflows, ensuring robust and reliable system …
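As a sketch of the Pytest-plus-Pandas validation work described above, with a local CSV standing in for an S3/Athena extract and all file and column names assumed:

```python
# Illustrative Pytest data-quality checks over a Pandas extract.
# The file name and columns are assumptions for the sketch.
import pandas as pd
import pytest


@pytest.fixture(scope="module")
def orders() -> pd.DataFrame:
    # In a real suite this might come from S3 or an Athena query result.
    return pd.read_csv("orders_extract.csv", parse_dates=["order_date"])


def test_no_duplicate_primary_keys(orders):
    assert orders["order_id"].is_unique


def test_mandatory_fields_not_null(orders):
    for column in ("order_id", "customer_id", "order_date"):
        assert orders[column].notna().all(), f"{column} contains nulls"


def test_amounts_within_expected_range(orders):
    assert orders["amount"].between(0, 1_000_000).all()
```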
to share knowledge with your peers) Nice To Have Knowledge of systems design within a modern cloud-based environment (AWS, GCP) including AWS primitives such as IAM, S3, RDS, EMR, ECS and more Advanced experience working and understanding the tradeoffs of at least one of the following Data Lake table/file formats: Delta Lake, Parquet, Iceberg, Hudi Previous …
optimised and efficient data marts and warehouses in the cloud) Work with Infrastructure as code (Terraform) and containerising applications (Docker) Work with AWS, S3, SQS, Iceberg, Parquet, Glue and EMR for our Data Lake Experience developing CI/CD pipelines More information: Enjoy fantastic perks like private healthcare & dental insurance, a generous work from abroad policy, 2-for …
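One hedged illustration of the S3/SQS/Parquet data-lake work this listing mentions: drain a batch of SQS messages and land them as Parquet on S3. The queue URL, bucket name and message schema are invented for the sketch.

```python
# Drain up to 10 SQS messages and land them as a Parquet object in S3.
# Queue URL, bucket name and message schema are placeholders.
import json
import boto3
import pandas as pd

sqs = boto3.client("sqs")
s3 = boto3.client("s3")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/events"  # placeholder

resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=5)
messages = resp.get("Messages", [])

if messages:
    records = [json.loads(m["Body"]) for m in messages]
    df = pd.DataFrame.from_records(records)
    df.to_parquet("/tmp/batch.parquet", index=False)  # needs pyarrow or fastparquet
    s3.upload_file("/tmp/batch.parquet", "example-data-lake", "raw/events/batch.parquet")

    # Delete only after a successful write so messages are not lost.
    for m in messages:
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=m["ReceiptHandle"])
```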
disaster-recovery drills for stream and batch environments. Architecture & Automation: Collaborate with data engineering and product teams to architect scalable, fault-tolerant pipelines using AWS services (e.g., Step Functions, EMR, Lambda, Redshift) integrated with Apache Flink and Kafka. Troubleshoot & Maintain: Python-based applications. Harden CI/CD for data jobs: implement automated testing of data schemas, versioned Flink …
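Not the team's actual stack, but a minimal kafka-python sketch of the sort of Python streaming job described above; the topic, brokers, consumer group and schema check are assumptions.

```python
# Minimal Kafka consumer with manual commits and a simple schema guard.
# Topic, brokers and field names are placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                   # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="orders-validator",
    enable_auto_commit=False,                   # commit only after processing
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # In production, records failing this check would go to a dead-letter topic.
    if {"order_id", "amount"} <= event.keys():
        print(f"processed order {event['order_id']} amount={event['amount']}")
    consumer.commit()
```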
APIs & microservices-based solutions. Basic knowledge of User Interface design & development using Angular, React, HTML5, XML & CSS. Working knowledge of AWS cloud (EC2, ECS, Load Balancer, Security Group, EMR, Lambda, S3, Glue, etc.) Experience in DevOps development and deployment using Docker and containers. Domain knowledge in the Financial Industry and Capital Markets is a plus. Bachelor's degree in …
APIs & microservices-based solutions. Basic knowledge of User Interface design & development using Angular, React, HTML5, XML & CSS. Working knowledge of AWS cloud (EC2, ECS, Load Balancer, Security Group, EMR, Lambda, S3, Glue, etc.) Experience in DevOps development and deployment using containers. Domain knowledge in the Financial Industry and Capital Markets is a plus. Bachelor's degree in computer science …
with strong potential to extend. Working Model: Hybrid. Key Skills (in order of priority): SQL (advanced); Python; SAS; building and maintaining data pipelines (ideally using AWS: S3, Glue, Athena, EMR); Spark, Hadoop, or other big data frameworks; data modelling (star/snowflake schemas), governance, and compliance awareness. Join a collaborative, agile team delivering critical data infrastructure for a major …
City of London, London, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment
Experience in a data-focused SRE, Data Platform, or DevOps role *Strong knowledge of Apache Flink, Kafka, and Python in production environments *Hands-on experience with AWS (Lambda, EMR, Step Functions, Redshift, etc.) *Comfortable with monitoring tools, distributed systems debugging, and incident response Reference Number: BBBH(phone number removed) To apply for this role or to be …
Employment Type: Permanent
Salary: £80000 - £90000/annum 38 Days Holiday, Healthcare, Pension
London, South East, England, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment Limited
Experience in a data-focused SRE, Data Platform, or DevOps role *Strong knowledge of Apache Flink, Kafka, and Python in production environments *Hands-on experience with AWS (Lambda, EMR, Step Functions, Redshift, etc.) *Comfortable with monitoring tools, distributed systems debugging, and incident response Reference Number: BBBH259303 To apply for this role or to be considered for further …
Software Engineering using a high-level language like Go, Java, JavaScript, Python Distributed Software Architecture exposure in high-volume production scenarios Working with Data Mesh and Big Data technologies such as EMR, Spark, Databricks Designing, tracking and testing to SLOs and Chaos Engineering to Error Budgets Implementing Business Continuity (BCP) and Disaster Recovery (DRP) plans including tracking RTO and RPO CryptoCurrency …
pragmatic and open-minded approach to achieving outcomes in the simplest way possible. Have worked with stream processing technologies (e.g. Apache Kafka). Have experience with AWS services, especially EMR & ECS. Are passionate about software quality, DevOps (e.g. Terraform) and automation. Work well in lean, agile, cross-functional product teams using Scrum and Kanban practices. Are a good communicator …
Amazon Retail Financial Intelligence Systems is seeking a seasoned and talented Senior Data Engineer to join the Fortune Platform team. Fortune is a fast-growing team with a mandate to build tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of Business Intelligence solutions using big data technologies … such as Apache Spark, Hive/Hadoop, and distributed query engines. As a Data Engineer at Amazon, you will be working in a large, extremely complex and dynamic data environment. You should be passionate about working with big data and be able to learn new technologies rapidly and evaluate them critically. You should have excellent communication skills and … 3+ years of data engineering experience - Experience with data modeling, warehousing and building ETL pipelines - Experience with SQL PREFERRED QUALIFICATIONS - Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions - Experience with non-relational databases/data stores (object storage, document or key-value stores, graph databases, column-family databases) Our …
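For illustration, a small Spark SQL sketch of the kind of aggregation step such profit-and-loss ETL pipelines typically run; the view name, paths and columns are invented for the example, not Amazon's schema.

```python
# Aggregate raw transactions into a daily P&L rollup and write partitioned Parquet.
# Paths, view name and columns are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pnl-rollup-sketch").getOrCreate()

spark.read.parquet("s3://example-lake/raw/transactions/") \
     .createOrReplaceTempView("transactions")

daily_pnl = spark.sql("""
    SELECT business_unit,
           CAST(transaction_ts AS DATE) AS business_date,
           SUM(revenue) - SUM(cost)     AS daily_profit
    FROM transactions
    GROUP BY business_unit, CAST(transaction_ts AS DATE)
""")

# Partitioned output that a downstream Redshift COPY / Spectrum table could consume.
daily_pnl.write.mode("overwrite").partitionBy("business_date") \
    .parquet("s3://example-lake/curated/daily_pnl/")
```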