Amazon Web Services Australia Pty Ltd. Are you a Senior Cloud Architect and GenAI consulting specialist? Do you have real-time Data Analytics, Data Warehousing, Big Data, Modern Data Strategy, Data Lake, Data Engineering and GenAI experience? Do you have senior stakeholder engagement experience to support pre-sales and deliver consulting engagements? Do you like to … learn and use services such as AWS Glue, Amazon S3, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Kinesis, Amazon Redshift, Amazon Athena, AWS Lake Formation, Amazon DataZone, Amazon SageMaker, Amazon QuickSight and Amazon … to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most …
City of London, London, United Kingdom Hybrid / WFH Options
TAGMATIX360
Location: (Hybrid) London, UK. Job Type: Contract. Job description: Experience with AWS, Python and Azure Databricks is mandatory. Design, develop, and optimize ETL pipelines using AWS Glue, Amazon EMR and Kinesis for real-time and batch data processing. Implement data transformation, streaming, and storage solutions on AWS, ensuring scalability and performance. Collaborate with cross-functional teams to integrate … and manage data workflows. Skill set: Amazon Redshift, S3, AWS Glue, Amazon EMR, Kinesis Analytics. Ensure data security, compliance, and best practices in cloud data engineering.
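The pipeline work this listing describes, one transform shared by the streaming (Kinesis) and batch (Glue/EMR over S3) paths, can be sketched in plain Python. All record fields and function names here are hypothetical illustrations; the actual AWS wiring (boto3 clients, Glue job context) is deliberately omitted.

```python
import json
from datetime import datetime, timezone


def transform_record(raw: bytes) -> dict:
    """Normalise one raw event (e.g. a Kinesis record payload) into a flat row.

    Hypothetical input schema: {"id": ..., "amount_cents": ...}.
    """
    event = json.loads(raw)
    return {
        "id": str(event["id"]),
        "amount": event["amount_cents"] / 100.0,  # cents -> currency units
        "processed_at": datetime.now(timezone.utc).isoformat(),
    }


def transform_batch(raw_records: list[bytes]) -> list[dict]:
    """Batch path: the same transform a Glue/EMR job would map over S3 objects."""
    return [transform_record(r) for r in raw_records]
```

Keeping the per-record transform pure like this is what lets one function serve both the real-time and batch sides of a pipeline.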
data processing) Apache Spark Streaming, Kafka or similar (for real-time data streaming). Experience using data tools in at least one cloud service: AWS, Azure or GCP (e.g. S3, EMR, Redshift, Glue, Azure Data Factory, Databricks, BigQuery, Dataflow, Dataproc). Would you like to join us as we work hard, have fun and make history? Apply for this job.
… Architect and implement integration solutions using modern tech stacks including Kafka, Kinesis, and API Gateway. Process and analyze structured and unstructured data using Spark, Amazon EMR, and PySpark. Manage and monitor AWS infrastructure including EC2, ECS, Lambda, S3, RDS, DynamoDB, Glue ETL, Redshift, and CloudFormation. Implement CI/CD pipelines using Jenkins, Terraform, and Gradle …
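"Processing structured and unstructured data" in a Spark/EMR job usually starts with flattening semi-structured records into columnar rows. A minimal pure-Python sketch of that step (the nested keys are hypothetical; in a real job this logic would run inside a PySpark UDF or DataFrame transform):

```python
from typing import Any


def flatten(record: dict, prefix: str = "") -> dict[str, Any]:
    """Recursively flatten nested JSON into dotted column names,
    the shape a warehouse like Redshift expects."""
    flat: dict[str, Any] = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=f"{name}."))
        else:
            flat[name] = value
    return flat
```

For example, `flatten({"user": {"id": 7}, "ok": True})` yields a single-level dict with keys `user.id` and `ok`.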
verbal skills, able to translate concepts into easily understood diagrams and visuals for both technical and non-technical people alike. AWS cloud products (Lambda functions, Redshift, S3, Amazon MQ, Kinesis, EMR, RDS (Postgres)). Apache Airflow for orchestration. DBT for data transformations. Machine Learning for product insights and recommendations. Experience with microservices using technologies like Docker for local development.
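The Airflow orchestration mentioned above is, at its core, scheduling tasks in dependency order. That idea can be sketched with the standard library alone (task names are hypothetical; a real Airflow DAG declares the same edges with operators and `>>`):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: extract -> transform -> {load_redshift, train_model}.
# Each key maps a task to the set of tasks it depends on.
DAG = {
    "transform": {"extract"},
    "load_redshift": {"transform"},
    "train_model": {"transform"},
}


def run_order(dag: dict[str, set[str]]) -> list[str]:
    """Return one valid execution order: every task after its dependencies,
    which is exactly the guarantee a scheduler like Airflow provides."""
    return list(TopologicalSorter(dag).static_order())
```

`graphlib` is in the Python standard library (3.9+), so this runs without installing Airflow.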
Manager, Config, CloudTrail, Puppet PE. Monitor performance against operational metrics. Manage security and identity with AWS IAM, KMS, Certificate Manager, Secrets Manager. Support AWS analytics solutions in production: AWS EMR, Data Pipeline, Redshift. Handle storage and backup in hybrid cloud environments. Participate in on-call rotation for 24/7 support of production environments. Required Skills and Experience: Bachelor …
the following: Python, SQL, Java. Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams. Deep knowledge of database technologies: distributed systems (e.g., Spark, Hadoop, EMR); RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL); NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j). Solid understanding of software engineering best practices: code reviews, testing frameworks, CI/CD, and code …
be a fit if you have: Expertise in Cloud-Native Data Engineering: 3+ years building and running data pipelines in AWS or Azure, including managed data services (e.g., Kinesis, EMR/Databricks, Redshift, Glue, Azure Data Lake). Programming Mastery: Advanced skills in Python or another major language; writing clean, testable, production-grade ETL code at scale. Modern Data …
Pytest and Postman. Expertise in Pandas, SQL, and AWS analytics services (Glue, Athena, Redshift) for data profiling, transformation, and validation within data lakes. Solid experience with AWS (S3, Lambda, EMR, ECS/EKS, CloudFormation/Terraform) and understanding of cloud-native architectures and best practices. Advanced skills in automating API and backend testing workflows, ensuring robust and reliable system …
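The data-lake validation this role describes often reduces to column-level checks over query results. A minimal pure-Python sketch (field names are hypothetical; in practice these checks would run under Pytest against rows returned by Athena or Glue):

```python
def validate_rows(rows: list[dict], required: set[str]) -> list[str]:
    """Return human-readable violations for a batch of rows; empty list = clean."""
    errors: list[str] = []
    for i, row in enumerate(rows):
        missing = required - row.keys()
        if missing:
            errors.append(f"row {i}: missing columns {sorted(missing)}")
        # Hypothetical type rule: 'amount', when present, must be numeric.
        if "amount" in row and not isinstance(row["amount"], (int, float)):
            errors.append(f"row {i}: amount is not numeric")
    return errors
```

Returning a list of violations rather than raising on the first failure lets a test report every data-quality problem in one run.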
Experience with proactive management and team ownership of cloud infrastructure. Beneficial Experience: AWS certifications; familiarity with SIEM solutions and Security Incident Management; cybersecurity awareness or certification; data engineering familiarity (EMR, ETL); coaching or mentoring experience. Key Behaviours: Excellent problem-solving skills. Flexibility to experiment and adapt quickly based on results. Strong team collaboration and communication skills. Proactive ownership of …
to share knowledge with your peers) Nice To Have: Knowledge of systems design within a modern cloud-based environment (AWS, GCP), including AWS primitives such as IAM, S3, RDS, EMR, ECS and more. Advanced experience working with and understanding the tradeoffs of at least one of the following Data Lake table/file formats: Delta Lake, Parquet, Iceberg, Hudi. Previous …
optimised and efficient data marts and warehouses in the cloud). Work with Infrastructure as Code (Terraform) and containerised applications (Docker). Work with AWS, S3, SQS, Iceberg, Parquet, Glue and EMR for our Data Lake. Experience developing CI/CD pipelines. More information: Enjoy fantastic perks like private healthcare & dental insurance, a generous work-from-abroad policy, 2-for …
practices for greater efficiency and impact. The skills you'll need to succeed: Leadership in data engineering and Agile delivery. Advanced knowledge of AWS data services (e.g. S3, Glue, EMR, Lambda, Redshift). Expertise in big data technologies and distributed systems. Strong coding and optimisation skills (e.g. Python, Spark, SQL). Data quality management and observability. Strategic thinking and solution architecture …
Investment Management apparatus at Vanguard. The role involves working with Vanguard's funds data and using leading technologies with Cloud Native architecture: Java (Spring Boot), AWS services (IAM, S3, ECS, EMR, Lambda, Athena, DynamoDB, etc.) and Python. The team uses agile methodologies and operates a continuous delivery pipeline, deploying daily. Vanguard is one of the world's largest investment management …
disaster-recovery drills for stream and batch environments. Architecture & Automation: Collaborate with data engineering and product teams to architect scalable, fault-tolerant pipelines using AWS services (e.g., Step Functions, EMR, Lambda, Redshift) integrated with Apache Flink and Kafka. Troubleshoot and maintain Python-based applications. Harden CI/CD for data jobs: implement automated testing of data schemas, versioned Flink …
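The "automated testing of data schemas" this listing asks for is commonly a CI gate that rejects records whose shape has drifted. A minimal sketch (the schema and field names are hypothetical; a production setup would more likely use a schema registry with Avro or JSON Schema):

```python
# Hypothetical contract for events flowing through the Flink/Kafka pipeline.
EXPECTED_SCHEMA: dict[str, type] = {"event_id": str, "ts": str, "value": float}


def check_schema(record: dict, schema: dict[str, type]) -> bool:
    """True only if the record has exactly the expected fields, each with
    the expected type -- the kind of check CI runs before deploying a job."""
    if set(record) != set(schema):
        return False  # missing or unexpected fields = drift
    return all(isinstance(record[key], typ) for key, typ in schema.items())
```

Failing the build on drift here is much cheaper than discovering it downstream as a corrupted Redshift load.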
Falls Church, Virginia, United States Hybrid / WFH Options
Rackner
moving Agile DevSecOps team that builds secure, scalable data platforms, and get paid weekly. What You'll Do: Build OpenAPI-compliant APIs, data schemas, and pipelines in AWS (S3, RDS, EMR). Develop with Python (FastAPI, Django, Flask) and JavaScript (Node.js, Vue, React). Deploy containerized workloads in Kubernetes (AWS EKS, Rancher) with CI/CD. Apply DevSecOps and security-first practices from …
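An "OpenAPI-compliant API" is one whose contract is an OpenAPI document; frameworks like FastAPI generate that document for you, but the artifact itself is just structured data. A minimal sketch of building one by hand (the title and path here are hypothetical examples):

```python
def openapi_spec(title: str, paths: dict) -> dict:
    """Assemble a minimal OpenAPI 3.0 document with the required
    top-level fields: openapi, info, and paths."""
    return {
        "openapi": "3.0.3",
        "info": {"title": title, "version": "0.1.0"},
        "paths": paths,
    }


# Hypothetical single-endpoint contract for an ingest service.
spec = openapi_spec(
    "ingest-api",
    {
        "/records": {
            "post": {
                "summary": "Ingest a record",
                "responses": {"201": {"description": "Created"}},
            }
        }
    },
)
```

Contract-first teams check a document like this into the repo and validate implementations against it in CI.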
City of London, London, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment Limited
Experience in a data-focused SRE, Data Platform, or DevOps role. Strong knowledge of Apache Flink, Kafka, and Python in production environments. Hands-on experience with AWS (Lambda, EMR, Step Functions, Redshift, etc.). Comfortable with monitoring tools, distributed systems debugging, and incident response. Reference Number: BBBH259303. To apply for this role or to be considered for further …