Job ID: Amazon Web Services Australia Pty Ltd Are you a Senior Cloud Architect and GenAI consulting specialist? Do you have real-time Data Analytics, Data Warehousing, Big Data, Modern Data Strategy, Data Lake, Data Engineering and GenAI experience? Do you have senior stakeholder engagement experience to support pre-sales and deliver consulting engagements? Do you like to solve … Vetting Agency clearance. Key job responsibilities Expertise: Collaborate with pre-sales and delivery teams to help partners and customers learn and use services such as AWS Glue, Amazon S3, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Kinesis, Amazon Redshift, Amazon Athena, AWS Lake Formation … Amazon DataZone, Amazon SageMaker, Amazon QuickSight and Amazon Bedrock. Solutions: Support pre-sales and deliver technical engagements with partners and customers. This includes participating in pre-sales visits, understanding customer requirements, creating consulting proposals and creating packaged data analytics service offerings. Delivery: Engagements include projects proving the use of AWS services to support new distributed computing …
Senior Delivery Consultant - Data Analytics & GenAI, AWS Professional Services Public Sector Job ID: Amazon Web Services Australia Pty Ltd Are you a Senior Data Analytics and GenAI consulting specialist? Do you have real-time Data Analytics, Data Warehousing, Big Data, Modern Data Strategy, Data Lake, Data Engineering and GenAI experience? Do you have senior stakeholder engagement experience to support … decisions and desired customer outcomes. Key job responsibilities Expertise: Collaborate with pre-sales and delivery teams to help partners and customers learn and use services such as AWS Glue, Amazon S3, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Kinesis, Amazon Redshift, Amazon Athena, AWS Lake Formation … Amazon DataZone, Amazon SageMaker, Amazon QuickSight and Amazon Bedrock. Solutions: Support pre-sales and deliver technical engagements with partners and customers. This includes participating in pre-sales visits, understanding customer requirements, creating consulting proposals and creating packaged data analytics service offerings. Delivery: Engagements include projects proving the use of AWS services to support new distributed computing …
Job ID: Amazon Web Services Australia Pty Ltd Are you a Cloud Architect with GenAI experience? Do you have real-time Data Analytics, Data Warehousing, Big Data, Modern Data Strategy, Data Lake and Data Engineering experience? Do you like to solve the most complex and high scale (billions+ records) data challenges in the world today? Do you like leading … Vetting Agency clearance. Key job responsibilities Expertise: Collaborate with pre-sales and delivery teams to help partners and customers learn and use services such as AWS Glue, Amazon S3, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Kinesis, Amazon Redshift, Amazon Athena, AWS Lake Formation … Amazon DataZone, Amazon SageMaker, Amazon QuickSight and Amazon Bedrock. Solutions: Deliver technical engagements with partners and customers. This includes participating in pre-sales visits, understanding customer requirements, creating consulting proposals and creating packaged data analytics service offerings. Delivery: Engagements include projects proving the use of AWS services to support new distributed computing solutions that often span …
Delivery Consultant - Data Analytics & GenAI, AWS Professional Services Public Sector Job ID: Amazon Web Services Australia Pty Ltd Are you a Data Analytics and GenAI specialist? Do you have real-time Data Analytics, Data Warehousing, Big Data, Modern Data Strategy, Data Lake, Data Engineering and GenAI experience? Do you like to solve the most complex and high scale (billions+ … decisions and desired customer outcomes. Key job responsibilities: Expertise: Collaborate with pre-sales and delivery teams to help partners and customers learn and use services such as AWS Glue, Amazon S3, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Kinesis, Amazon Redshift, Amazon Athena, AWS Lake Formation … Amazon DataZone, Amazon SageMaker, Amazon QuickSight and Amazon Bedrock. Solutions: Deliver technical engagements with partners and customers. This includes participating in pre-sales visits, understanding customer requirements, creating consulting proposals and creating packaged data analytics service offerings. Delivery: Engagements include projects proving the use of AWS services to support new distributed computing solutions that often span …
and the consultative and leadership skills to launch a project on a trajectory to success? Do you want to be part of the business development team helping to establish Amazon Web Services as a leading technology platform? Do you have the passion to learn and experience technology hands-on? Do you like working in a dynamic environment? Are you … a team player? As a Solution Architect you will be a part of the Business Development organization of Amazon Web Services (AWS), and will have the opportunity to help shape and deliver on a strategy to build mind share and broad use of Amazon's utility computing web services (Amazon EC2, Amazon S3, Amazon DynamoDB … Amazon Redshift, Amazon CloudFront) across different types of strategic customers, from startups brainstorming their first ideas to R&D groups within larger enterprises. Your broad responsibilities will include: owning the technical engagement and ultimate success around specific implementation projects, and developing a deep expertise in the AWS technologies as well as broad know-how around how …
AWS Data Engineer London, UK Permanent Strong experience in Python, PySpark, AWS S3, AWS Glue, Databricks, Amazon Redshift, DynamoDB, CI/CD and Terraform. A total of 7+ years of experience in data engineering is required. Design, develop, and optimize ETL pipelines using AWS Glue, Amazon EMR and Kinesis for real-time and batch data processing. Implement data transformation, streaming, and storage solutions on AWS, ensuring scalability and performance. Collaborate with cross-functional teams to integrate and manage data workflows. Skill-set: Amazon Redshift, S3, AWS Glue, Amazon EMR, Kinesis Analytics. Ensure data security, compliance, and best practices in cloud data engineering. Experience with programming languages such as Python, Java, or Scala. …
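As context for the Glue/EMR pipeline work this listing describes, here is a minimal PySpark batch job of the sort an AWS Glue ETL script might run; the bucket paths and column names are illustrative assumptions, not details taken from the advert.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read raw CSV files landed in S3 (schema inference kept simple for brevity).
raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/orders/")

# Basic cleansing: de-duplicate, cast types, derive a partition column.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_total", F.col("order_total").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write partitioned Parquet back to S3 so Athena or Redshift Spectrum can
# scan only the partitions a query needs.
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))
```

Writing partitioned Parquet back to S3 is one common way to keep curated data cheap to scan from Athena or to load into Redshift.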
Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka). Familiarity with AWS and its data services (e.g. S3, Athena, AWS Glue). Familiarity with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake). Knowledge of containerization and orchestration tools (e.g., Docker, ECS, Kubernetes). Familiarity with data orchestration tools (e.g. Prefect, Apache Airflow). Familiarity with CI/CD …
… processing); Apache Spark Streaming, Kafka or similar (for real-time data streaming). Experience using data tools in at least one cloud service - AWS, Azure or GCP (e.g. S3, EMR, Redshift, Glue, Azure Data Factory, Databricks, BigQuery, Dataflow, Dataproc). Would you like to join us as we work hard, have fun and make history?
broad range of problems using your technical skills. Demonstrable experience of utilising strong communication and stakeholder management skills when engaging with customers. Significant experience with cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP). Strong proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or Oracle. Experience with big data technologies such as Hadoop, Spark, or Hive. Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow. Proficiency in Python and at least one other programming language such as Java or Scala. Willingness to mentor more junior members of the team. Strong analytical and problem-solving skills with the ability to …
focus on building scalable data solutions. Experience with data pipeline orchestration tools such as Dagster or similar. Familiarity with cloud platforms (e.g. AWS) and their data services (e.g., S3, Redshift, Snowflake). Understanding of data warehousing concepts and experience with modern warehousing solutions. Experience with GitHub Actions (or similar) and implementing CI/CD pipelines for data workflows and …
skills. Good written and verbal skills, able to translate concepts into easily understood diagrams and visuals for both technical and non-technical people alike. AWS cloud products (Lambda functions, Redshift, S3, AmazonMQ, Kinesis, EMR, RDS (Postgres)). Apache Airflow for orchestration. DBT for data transformations. Machine Learning for product insights and recommendations. Experience with microservices using technologies like Docker …
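Since this stack pairs Apache Airflow for orchestration with DBT for transformations, a small sketch may help; it assumes Airflow 2.4+ and invents the DAG id, schedule, script path and dbt project location.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_warehouse_refresh",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Land fresh data first (the extract script path is an assumption).
    extract = BashOperator(
        task_id="extract_to_s3",
        bash_command="python /opt/pipelines/extract.py",
    )
    # Then rebuild the dbt models on top of the newly landed data.
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt_project",
    )
    extract >> transform  # enforce extract-before-transform ordering
```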
Snowflake (data warehousing and performance tuning); Informatica (ETL/ELT development and orchestration) - nice to have; Python (data processing and scripting) - required; AWS (data services such as S3, Glue, Redshift, Lambda) - required; cloud data practices and platform - AWS required. Basic knowledge of related disciplines such as data science, software engineering, and business analytics. Proven ability to independently resolve complex …
may either leverage third-party tools such as Fivetran, Airbyte or Stitch, or build custom pipelines. We use the main data warehouses for dbt modelling and have extensive experience with Redshift, BigQuery and Snowflake. Recently we've been rolling out a serverless implementation of dbt and progressing work on an internal product to build modular data platforms. When initially working with …
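The "serverless implementation of dbt" mentioned above can take several forms; one plausible sketch, assuming dbt-core 1.5+ and an invented project path and event shape, is a function handler that invokes dbt programmatically rather than shelling out.

```python
from dbt.cli.main import dbtRunner

def handler(event, context):
    """Run dbt inside a serverless function invocation (sketch)."""
    # The project path is an assumption about how the code is packaged.
    args = ["run", "--project-dir", "/var/task/dbt_project"]

    # Optionally narrow the run to a node selector passed in the event.
    select = (event or {}).get("select")
    if select:
        args += ["--select", select]

    result = dbtRunner().invoke(args)
    return {"success": result.success}
```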
cloud platforms (GCP, AWS, Azure) and their data-specific services Proficiency in Python, SQL, and data orchestration tools (e.g., Airflow, DBT) Experience with modern data warehouse technologies (BigQuery, Snowflake, Redshift, etc.) Strong understanding of data modeling, data governance, and data quality principles Excellent communication skills with the ability to translate complex technical concepts for business stakeholders Strategic thinking with …
London, South East, England, United Kingdom Hybrid / WFH Options
WüNDER TALENT
input into technical decisions, peer reviews and solution design. Requirements: Proven experience as a Data Engineer in cloud-first environments. Strong commercial knowledge of AWS services (e.g. S3, Glue, Redshift). Advanced PySpark and Databricks experience (Delta Lake, Unity Catalog, Databricks Jobs, etc.). Proficient in SQL (T-SQL/SparkSQL) and Python for data transformation and scripting. Hands-on …
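For the Databricks/Delta Lake experience this role asks for, a minimal illustration follows; it assumes a Databricks-style environment where Delta and Unity Catalog are available, and the three-level table name is invented.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Toy input frame standing in for an ingested batch.
incoming = spark.createDataFrame(
    [(1, "open"), (2, "closed")], ["ticket_id", "status"]
)

# Delta adds ACID transactions and time travel on top of Parquet storage,
# which is why it underpins most Databricks lakehouse tables.
incoming.write.format("delta").mode("overwrite").saveAsTable(
    "main.support.tickets"  # catalog.schema.table - hypothetical names
)

spark.table("main.support.tickets").show()
```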
or Kafka Streams)? Which statement best describes your hands-on responsibility for architecting and tuning cloud-native data lake/warehouse solutions (e.g., AWS S3 + Glue/Redshift, GCP BigQuery, Azure Synapse)? What best reflects your experience building ETL/ELT workflows with Apache Airflow (or similar) and integrating them into containerised CI/CD pipelines …
NumPy, Pandas, SQLAlchemy) and expert-level SQL across multiple database platforms Hands-on experience with modern data stack tools including dbt, Airflow, and cloud data warehouses (Snowflake, BigQuery, Redshift) Strong understanding of data modelling, schema design, and building maintainable ELT/ETL pipelines Experience with cloud platforms (AWS, Azure, GCP) and infrastructure-as-code practices Familiarity with data …
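A compact example of the pandas + SQLAlchemy pipeline work implied above; the connection string, table names and columns are placeholders, not details from the listing.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder DSN - in practice this would come from config or a secret store.
engine = create_engine("postgresql+psycopg2://user:pass@host:5432/analytics")

# Extract: pull the last day of events from an operational table.
events = pd.read_sql(
    "SELECT user_id, event_type, created_at FROM events "
    "WHERE created_at >= CURRENT_DATE - INTERVAL '1 day'",
    engine,
)

# Transform: aggregate to one row per user, event type and day.
daily = (
    events.assign(event_date=events["created_at"].dt.date)
          .groupby(["event_date", "user_id", "event_type"], as_index=False)
          .size()
          .rename(columns={"size": "event_count"})
)

# Load: append into a reporting table for dbt models or BI tools to consume.
daily.to_sql("daily_event_counts", engine, if_exists="append", index=False)
```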
similar Proficient in writing and maintaining bash scripts Experience writing concise and illustrative documentation Experience with Microsoft Azure and Google Cloud Experience with Data Engineering and Analytics products such as Snowflake, Redshift, Google Analytics, Segment, ELK Stack Qualifications Bachelor's degree in computer science or equivalent experience combined with theoretical knowledge What's in it For You? Flexibility & Work-Life Balance …
reliable, scalable, and well-tested solutions to automate data ingestion, transformation, and orchestration across systems. Own data operations infrastructure: Manage and optimise key data infrastructure components within AWS, including Amazon Redshift, Apache Airflow for workflow orchestration, and other analytical tools. You will be responsible for ensuring the performance, reliability, and scalability of these systems to meet the growing …
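Owning Redshift operations of the kind described here often means scripted maintenance; a hedged sketch using the boto3 Redshift Data API follows, with the cluster, database and table identifiers invented.

```python
import time

import boto3

# The "redshift-data" client runs SQL asynchronously, without a direct DB connection.
client = boto3.client("redshift-data", region_name="eu-west-2")

resp = client.execute_statement(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster name
    Database="warehouse",
    DbUser="etl_user",
    Sql="VACUUM FULL public.daily_event_counts;",
)

# Poll until the statement finishes (production code would back off and alert).
while True:
    desc = client.describe_statement(Id=resp["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(5)

print(desc["Status"])
```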
Kafka, Spark Streaming, Kinesis) Familiarity with schema design and semi-structured data formats Exposure to containerisation, graph databases, or machine learning concepts Proficiency with cloud-native data tools (BigQuery, Redshift, Snowflake) Enthusiasm for learning and experimenting with new technologies Why Join Capco: Deliver high-impact technology solutions for Tier 1 financial institutions. Work in a collaborative, flat, and entrepreneurial …
both at the Board/Executive level and at the business unit level. Key Responsibilities Design, develop, and maintain scalable ETL pipelines using technologies like dbt, Airbyte, Cube, DuckDB, Redshift, and Superset Work closely with stakeholders across the company to gather data requirements and set up dashboards Promote a data-driven culture at Notabene and train and upskill power users across …
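DuckDB is the lightest-weight piece of the stack named above, so a brief illustration may be useful; the Parquet file name is an invented local example.

```python
import duckdb

# In-memory database; DuckDB can also query Parquet files (or S3 objects via
# the httpfs extension) in place, with no load step.
con = duckdb.connect()

result = con.execute(
    "SELECT event_type, count(*) AS n "
    "FROM 'events.parquet' "  # hypothetical local file
    "GROUP BY event_type ORDER BY n DESC"
).df()

print(result.head())
```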
… technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based data services (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery). Experience with DataOps practices and tools, including CI/CD for data pipelines. Excellent leadership, communication, and interpersonal skills …
and experience relating data insight to business problems and creating appropriate dashboards. High proficiency in ETL, SQL and database management is mandatory. Experience with AWS services like Glue, Athena, Redshift, Lambda, S3. Python programming experience using data libraries like pandas and NumPy. Interest in machine learning, logistic regression and emerging solutions for data analytics. You are comfortable working …
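To ground the Glue/Athena plus pandas combination this listing asks about, here is a hedged sketch using the AWS SDK for pandas (awswrangler); the database, table and column names are invented.

```python
import awswrangler as wr
import numpy as np

# Run an Athena query over S3-backed data and receive a pandas DataFrame.
df = wr.athena.read_sql_query(
    "SELECT customer_id, order_total "
    "FROM orders WHERE order_date = DATE '2024-01-01'",
    database="curated",  # hypothetical Glue catalog database
)

# Simple pandas/NumPy analysis: flag orders above the 95th percentile.
threshold = np.percentile(df["order_total"], 95)
df["is_outlier"] = df["order_total"] > threshold

print(df["is_outlier"].mean())  # share of flagged orders
```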