Job ID: Amazon Web Services Australia Pty Ltd Are you a Senior Cloud Architect and GenAI consulting specialist? Do you have real-time Data Analytics, Data Warehousing, Big Data, Modern Data Strategy, Data Lake, Data Engineering and GenAI experience? Do you have senior stakeholder engagement experience to support pre-sales and deliver consulting engagements? Do you like to solve … Vetting Agency clearance (see ). Key job responsibilities Expertise: Collaborate with pre-sales and delivery teams to help partners and customers learn and use services such as AWS Glue, Amazon S3, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Kinesis, Amazon Redshift, Amazon Athena, AWS Lake Formation … Amazon DataZone, Amazon SageMaker, Amazon QuickSight and Amazon Bedrock. Solutions: Support pre-sales and deliver technical engagements with partners and customers. This includes participating in pre-sales visits, understanding customer requirements, creating consulting proposals and creating packaged data analytics service offerings. Delivery: Engagements include projects proving the use of AWS services to support new distributed computing More ❯
Sr. Analytics GTM Specialist, GCR SSO (Service Specialist Organization) Job ID: Beijing Century Joyo Information Technology Co., Ltd. Shenzhen Branch Amazon Web Services, an Amazon company, has been the world's leading cloud provider for more than 17 years with the most mature, comprehensive, and broadly adopted cloud platform. We have over 200 fully featured cloud services, managed from … the globe. Millions of customers in over 240 countries - from the fastest-growing start-ups to the largest enterprises, through to leading government agencies - all place their trust in Amazon Web Services to power their infrastructure and deliver innovation. AWS Global Sales drives adoption of the AWS cloud worldwide, enabling customers of all sizes to innovate and expand in … challenges, then craft innovative solutions that accelerate their success. This customer-first approach is how we built the world's most adopted cloud. Join us and help us grow. Amazon Web Services came to China in 2013, and has been relentlessly investing and expanding our infrastructure and business since then. Amazon Web Services launched its China (Beijing) Region More ❯
Job ID: AWS EMEA SARL (Israel Branch) Amazon Web Services is seeking a Go-To-Market Specialist to deliver value for customers via AWS's analytics solutions. As a GTM specialist for this fast-growing, exciting space you will have the opportunity to help drive the growth and shape the future of emerging technologies that will have an important … these desired outcomes by partnering with AWS. You will educate customers on "art of the possible" success stories and AWS enablers including our data lake offering (including Apache Iceberg, Amazon Athena, AWS Glue data catalog), data warehousing (Amazon Redshift), big data processing (Amazon EMR), search (Amazon OpenSearch Service), ETL (AWS Glue), streaming (Amazon Kinesis and Amazon Managed Streaming for Apache Kafka), and Amazon SageMaker Unified Studio bringing these all together. Then you will work with a specialized solution architect to craft a solution spanning these services as well as selected partner products and manage the sales process including evaluation, pricing, objection handling, navigating competitive pressures, proofs of concept and construction of a supporting business case. More ❯
SQL, and cloud-based data engineering tools Expertise in multiple cloud platforms (AWS, GCP, or Azure) and managing cloud-based data infrastructure Strong background in database technologies (SQL Server, Redshift, PostgreSQL, Oracle) Desirable Skills Familiarity with machine learning pipelines and MLOps practices Additional experience with Databricks and specific AWS services such as Glue, S3 and Lambda Proficient in Git, CI/ More ❯
processing) Apache Spark Streaming, Kafka or similar (for real-time data streaming) Experience using data tools in at least one cloud service - AWS, Azure or GCP (e.g. S3, EMR, Redshift, Glue, Azure Data Factory, Databricks, BigQuery, Dataflow, Dataproc) Would you like to join us as we work hard, have fun and make history? Apply for this job indicates a More ❯
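The real-time streaming experience this listing asks for centres on windowed aggregation. As a minimal, framework-free sketch of the idea behind a Spark Streaming or Kafka windowed count — the event names and the 60-second window size are hypothetical, and a real job would run this logic inside the streaming engine:

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # hypothetical tumbling-window size

def tumbling_window_counts(events):
    """Count events per key within fixed, non-overlapping time windows.

    `events` is an iterable of (timestamp_seconds, key) pairs; the grouping
    that Spark Structured Streaming or Kafka Streams performs at scale is
    done here in plain Python purely for illustration.
    """
    counts = defaultdict(int)
    for ts, key in events:
        # Align each event to the start of its tumbling window.
        window_start = (int(ts) // WINDOW_SECONDS) * WINDOW_SECONDS
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(3, "click"), (45, "click"), (61, "click"), (70, "view")]
result = tumbling_window_counts(events)
# window [0, 60) holds two clicks; window [60, 120) holds one click and one view
```

The same keyed-by-(window, key) shape is what the managed engines materialise, just with watermarking and state stores handling late and unbounded data.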
Master’s degree in Computer Science, Engineering, or related field. • 10+ years of experience in data engineering. • Strong hands-on experience with AWS services: S3, Glue, Lake Formation, Athena, Redshift, Lambda, IAM, CloudWatch. • Proficiency in PySpark, Python, DBT, Airflow, Docker and SQL. • Deep understanding of data modeling techniques and best practices. • Experience with CI/CD tools and version More ❯
Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka). Familiarity with AWS and its data services (e.g. S3, Athena, AWS Glue). Familiarity with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake). Knowledge of containerization and orchestration tools (e.g., Docker, ECS, Kubernetes). Familiarity with data orchestration tools (e.g. Prefect, Apache Airflow). Familiarity with CI/CD More ❯
data lake architectures Advanced proficiency in Python (including PySpark) and SQL, with experience building scalable data pipelines and analytics workflows Strong background in cloud-native data infrastructure (e.g., BigQuery, Redshift, Snowflake, Databricks) Demonstrated ability to lead teams, set technical direction, and collaborate effectively across business and technology functions Desirable skills Familiarity with machine learning pipelines and MLOps practices Additional More ❯
broad range of problems using your technical skills. Demonstrable experience of utilising strong communication and stakeholder management skills when engaging with customers Significant experience with cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP). Strong proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or Oracle. Experience with big … data technologies such as Hadoop, Spark, or Hive. Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow. Proficiency in Python and at least one other programming language such as Java or Scala. Willingness to mentor more junior members of the team. Strong analytical and problem-solving skills with the ability to More ❯
focus on building scalable data solutions. Experience with data pipeline orchestration tools such as Dagster or similar. Familiarity with cloud platforms (e.g. AWS) and their data services (e.g., S3, Redshift, Snowflake). Understanding of data warehousing concepts and experience with modern warehousing solutions. Experience with GitHub Actions (or similar) and implementing CI/CD pipelines for data workflows and More ❯
Java Experience in a Marketing technical stack and 3rd-party tools Broad experience working within AWS, from infrastructure (VPC, EC2, security groups, S3, etc.) through to AWS data platforms (Redshift, RDS, Glue, etc.). Working knowledge of infrastructure automation with Terraform. Working knowledge of test tools and frameworks such as JUnit, Mockito, ScalaTest, pytest. Working knowledge of build tools More ❯
Snowflake (data warehousing and performance tuning) Informatica (ETL/ELT development and orchestration) - nice to have Python (data processing and scripting) - required AWS (data services such as S3, Glue, Redshift, Lambda) - required Cloud data practices and platforms - AWS required Basic knowledge of related disciplines such as data science, software engineering, and business analytics. Proven ability to independently resolve complex More ❯
and big data platforms. Knowledge of data modeling, replication, and query optimization. Hands-on experience with SQL and NoSQL databases is desirable. Familiarity with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery) would be beneficial. Data Platform Management: Comfortable operating in hybrid environments (cloud and on-prem). Experience integrating diverse data sources and systems. Understanding of secure data transfer More ❯
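The query-optimization skill mentioned above can be shown with a tiny, self-contained sketch using the standard-library sqlite3 module — the table, column, and index names are invented for illustration. Before an index exists the engine full-scans the table; EXPLAIN QUERY PLAN makes the difference visible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

QUERY = "SELECT SUM(total) FROM orders WHERE customer_id = 42"

def plan(sql):
    # Each EXPLAIN QUERY PLAN row ends with a human-readable detail string.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

plan_before = plan(QUERY)  # full table scan: detail contains "SCAN"
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = plan(QUERY)   # now an index lookup: "... USING INDEX idx_orders_customer ..."
```

The same scan-vs-seek reasoning carries over to warehouse engines like Snowflake, Redshift, and BigQuery, where the equivalent levers are clustering keys, sort/dist keys, and partitioning rather than B-tree indexes.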
solutions. Support and mentor junior engineers, contributing to knowledge sharing across the team. What We're Looking For Strong hands-on experience across AWS Glue, Lambda, Step Functions, RDS, Redshift, and Boto3. Proficient in one of Python, Scala or Java, with strong experience in big data technologies such as Spark and Hadoop. Practical knowledge of building real-time event More ❯
skills. Good written and verbal skills, able to translate concepts into easily understood diagrams and visuals for both technical and non-technical people alike. AWS cloud products (Lambda functions, Redshift, S3, Amazon MQ, Kinesis, EMR, RDS (Postgres)). Apache Airflow for orchestration. DBT for data transformations. Machine Learning for product insights and recommendations. Experience with microservices using technologies like Docker More ❯
AWS Data Engineer Location: (Hybrid) London, UK Job Type: Contract Job description Experience with AWS, Python & Azure Databricks is mandatory. Design, develop, and optimize ETL pipelines using AWS Glue, Amazon EMR and Kinesis for real-time and batch data processing. Implement data transformation, streaming, and storage solutions on AWS, ensuring scalability and performance. Collaborate with cross-functional teams to … integrate and manage data workflows. Skill set: Amazon Redshift, S3, AWS Glue, Amazon EMR, Kinesis Analytics. Ensure data security, compliance, and best practices in cloud data engineering More ❯
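The data-transformation step of an ETL pipeline like the one this role describes can be sketched without any AWS dependency — the event schema and field names below are invented for illustration, and in a real AWS Glue or Kinesis job the same per-record logic would run inside the managed service:

```python
import json

def transform_records(raw_records):
    """Parse raw JSON event strings, drop malformed or incomplete ones,
    and normalise the rest — the per-record cleaning step a Glue or
    Kinesis consumer would apply (schema here is hypothetical)."""
    clean, rejected = [], 0
    for raw in raw_records:
        try:
            event = json.loads(raw)
        except json.JSONDecodeError:
            rejected += 1          # not valid JSON: route to a dead-letter store
            continue
        if "user_id" not in event or "amount" not in event:
            rejected += 1          # missing required fields
            continue
        clean.append({
            "user_id": str(event["user_id"]),           # normalise key type
            "amount": round(float(event["amount"]), 2),  # normalise currency value
        })
    return clean, rejected

batch = ['{"user_id": 1, "amount": "12.5"}', 'not json', '{"user_id": 2}']
clean, rejected = transform_records(batch)
```

Keeping the transform a pure function of its input records is what makes it equally usable from a batch EMR/Glue run and a streaming Kinesis consumer, and trivially unit-testable either way.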