Job ID: Amazon Web Services Australia Pty Ltd. Are you a Senior Cloud Architect and GenAI consulting specialist? Do you have real-time Data Analytics, Data Warehousing, Big Data, Modern Data Strategy, Data Lake, Data Engineering and GenAI experience? Do you have senior stakeholder engagement experience to support pre-sales and deliver consulting engagements? Do you like to solve … Vetting Agency clearance. Key job responsibilities Expertise: Collaborate with pre-sales and delivery teams to help partners and customers learn and use services such as AWS Glue, Amazon S3, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Kinesis, Amazon Redshift, Amazon Athena, AWS Lake Formation … Amazon DataZone, Amazon SageMaker, Amazon QuickSight and Amazon Bedrock. Solutions: Support pre-sales and deliver technical engagements with partners and customers. This includes participating in pre-sales visits, understanding customer requirements, creating consulting proposals and creating packaged data analytics service offerings. Delivery: Engagements include projects proving the use of AWS services to support new distributed computing …
London, South East, England, United Kingdom Hybrid / WFH Options
Holland & Barrett International Limited
from BI reporting to customer segmentation and content personalisation, your work will directly influence how H&B delivers for our customers. You'll work with cutting-edge technologies including Amazon Redshift, Matillion and the AWS ecosystem, and you'll be hands-on with SQL, ELT workflows and data modelling. This is an exciting opportunity to make a big … collaborating across product, tech and business teams. What You'll Be Doing Integrating raw data sources into the H&B data environment Building and scaling our data warehouse in Amazon Redshift Designing, building and maintaining ELT workflows Developing data cubes and applications to support BI, marketing and analytics Writing high-quality SQL and ensuring system health across our … years' experience transforming large datasets into analytics and BI solutions (e-commerce experience is a plus) Highly advanced SQL skills Strong experience with cloud data warehouses (AWS Redshift preferred) Experience with at least one programming language (Python preferred) A broad technical skillset and eagerness to learn — from Git and Linux to APIs, Docker, NoSQL and serverless tools Bonus: experience …
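The ELT pattern named in the listing above can be sketched in a few lines: raw data is loaded into the warehouse first, and the transformation happens afterwards in SQL inside the warehouse itself. This is a minimal, hedged illustration only — sqlite3 stands in for a warehouse such as Amazon Redshift, and the table and column names are invented for the example, not taken from any listing.

```python
import sqlite3

# ELT sketch: load raw data untyped, then transform with SQL in the warehouse.
# sqlite3 is a local stand-in for Amazon Redshift; all names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (order_id TEXT, amount TEXT, country TEXT)")

# Load: rows land exactly as extracted, amounts still stored as text.
raw = [("o1", "19.99", "GB"), ("o2", "5.00", "GB"), ("o3", "12.50", "NL")]
conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", raw)

# Transform: cast, aggregate, and materialise a reporting table that
# downstream BI tools can query directly.
conn.execute("""
    CREATE TABLE revenue_by_country AS
    SELECT country, ROUND(SUM(CAST(amount AS REAL)), 2) AS revenue
    FROM raw_orders
    GROUP BY country
""")

result = dict(conn.execute("SELECT country, revenue FROM revenue_by_country"))
print(result)  # {'GB': 24.99, 'NL': 12.5}
```

The point of the pattern is that the "T" step is plain SQL owned by the warehouse, which is what tools such as Matillion or dbt generate and schedule.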
Posted Today. Job requisition id: R. The purpose of this role is to oversee the development of our database marketing solutions, using database technologies such as Microsoft SQL Server/Azure, Amazon Redshift and Google BigQuery. The role will be involved in design, specifications, troubleshooting, and issue resolution. The ability to communicate to both technical and non-technical audiences is key. …
processing) Apache Spark Streaming, Kafka or similar (for real-time data streaming) Experience using data tools in at least one cloud service - AWS, Azure or GCP (e.g. S3, EMR, Redshift, Glue, Azure Data Factory, Databricks, BigQuery, Dataflow, Dataproc) Would you like to join us as we work hard, have fun and make history? Apply for this job …
London (City of London), South East England, United Kingdom
HCLTech
modular, efficient, and scalable data pipelines. Deep expertise in data modeling for both relational databases and data warehouses, including Star and Snowflake schema designs. Extensive experience working with AWS Redshift and Aurora for data warehousing and transactional workloads. Experience using dbt (Data Build Tool) for building modular, version-controlled, and testable data transformation workflows, with a strong understanding of …
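The star-schema design mentioned above can be illustrated with a toy example: a flat sales extract is split into a dimension table with surrogate keys and a slim fact table holding measures plus foreign keys. This is a hedged sketch in plain Python for clarity — in practice this split would live in dbt models targeting Redshift or Aurora, and every name here is invented for the example.

```python
# Star-schema sketch: derive a product dimension with surrogate keys from a
# flat sales extract, leaving a fact table of measures and foreign keys only.
# All table/column names are illustrative assumptions.
flat_sales = [
    {"sale_id": 1, "product": "tea", "category": "drinks", "qty": 3},
    {"sale_id": 2, "product": "soap", "category": "home", "qty": 1},
    {"sale_id": 3, "product": "tea", "category": "drinks", "qty": 2},
]

# Dimension: one row per distinct product, keyed by a surrogate integer.
dim_product = {}
for row in flat_sales:
    key = (row["product"], row["category"])
    if key not in dim_product:
        dim_product[key] = {"product_key": len(dim_product) + 1,
                            "product": row["product"],
                            "category": row["category"]}

# Fact: no repeated descriptive text, just measures and the surrogate key.
fact_sales = [
    {"sale_id": r["sale_id"],
     "product_key": dim_product[(r["product"], r["category"])]["product_key"],
     "qty": r["qty"]}
    for r in flat_sales
]
print(fact_sales[0])  # {'sale_id': 1, 'product_key': 1, 'qty': 3}
```

A snowflake schema takes the same idea one step further by normalising the dimension itself (for example, splitting category out of the product dimension into its own table).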
data lake architectures Advanced proficiency in Python (including PySpark) and SQL, with experience building scalable data pipelines and analytics workflows Strong background in cloud-native data infrastructure (e.g., BigQuery, Redshift, Snowflake, Databricks) Demonstrated ability to lead teams, set technical direction, and collaborate effectively across business and technology functions Desirable skills Familiarity with machine learning pipelines and MLOps practices …
practices and contribute to team knowledge. Required Skills 3+ years in a data engineering role. Proficient in SQL and Python. Strong experience with AWS services (e.g., Lambda, Glue, Redshift, S3). Solid understanding of data warehousing and modelling: star/snowflake schema. Familiarity with Git, CI/CD pipelines, and containerisation (e.g., Docker). Ability to troubleshoot BI …
Proven experience as a Data Architect or Lead Data Engineer in AWS environments * Deep understanding of cloud-native data services: S3, Redshift, Glue, Athena, EMR, Kinesis, Lambda * Strong hands-on expertise in data modelling, distributed systems, and pipeline orchestration (Airflow, Step Functions) * Background in energy, trading, or financial markets is a strong plus * Excellent knowledge of Python, SQL, and …
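Pipeline orchestration, as required in the listing above, boils down to running tasks in dependency order. The following hedged sketch shows the core idea behind an Airflow DAG or a Step Functions state machine using only the standard library; the task names and dependency map are invented for the example, and a real orchestrator would also handle retries, scheduling, and state.

```python
from graphlib import TopologicalSorter

# Orchestration sketch: a DAG of pipeline tasks, each mapped to the set of
# tasks it depends on. Names are illustrative assumptions.
deps = {
    "extract": set(),
    "load_raw": {"extract"},
    "transform": {"load_raw"},
    "publish": {"transform"},
}

def run_pipeline(dependencies):
    """Execute tasks in an order that respects every dependency edge."""
    order = list(TopologicalSorter(dependencies).static_order())
    log = []
    for task in order:
        log.append(task)  # a real orchestrator would invoke the task here
    return log

print(run_pipeline(deps))  # ['extract', 'load_raw', 'transform', 'publish']
```

Airflow expresses the same graph with operators and `>>` edges, and Step Functions with a JSON state machine, but topological ordering over a dependency graph is the mechanism underneath both.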
solving skills and ability to explain technical ideas to non-technical audiences. Experience with real-time data pipelines, event-driven architectures (Kafka/Kinesis), or modern data warehouses (Snowflake, Redshift, BigQuery) is a plus. Our Benefits: Paid Vacation Days Health insurance Commuter benefit Employee Stock Purchase Plan (ESPP) Mental Health & Family Forming Benefits Continuing education and corridor travel benefits …
Engineers and Associates, and lead technical discussions and design sessions. Key requirements: Must-Have: Strong experience with AWS services: Glue, Lambda, S3, Athena, Step Functions, EventBridge, EMR, EKS, RDS, Redshift, DynamoDB. Strong Python development skills. Proficient with Docker, containerization, and virtualization. Hands-on experience with CI/CD, especially GitLab CI. Solid experience with Infrastructure as Code (Terraform, CloudFormation …
or Kafka Streams)? Which statement best describes your hands-on responsibility for architecting and tuning cloud-native data lake/warehouse solutions (e.g., AWS S3 + Glue/Redshift, GCP BigQuery, Azure Synapse)? What best reflects your experience building ETL/ELT workflows with Apache Airflow (or similar) and integrating them into containerised CI/CD pipelines …
data governance, data quality, and compliance standards. Strong analytical and problem-solving skills with attention to detail. Excellent communication and documentation abilities. Preferred: Exposure to cloud data platforms (AWS Redshift, Azure Synapse, Google BigQuery, Snowflake). Knowledge of Python, R, or other scripting languages for data manipulation. Experience in banking, financial services, or enterprise IT environments.
models, not dashboarding. YOUR SKILLS & EXPERIENCE The successful candidate will have: Strong SQL and dbt experience. Familiarity with Kimball methodology, dimensional modelling, and star schema design. Proven experience with Redshift or Snowflake. Strong background in cloud-based data environments (AWS preferred). Hands-on experience with Airflow for orchestration. (Nice-to-have) Python for data engineering tasks. (Nice-to …
platforms, preferably Azure (AWS, GCP experience is also valuable). Experience managing large and complex datasets, with a strong command of SQL for cloud-based environments (Fabric, Snowflake, BigQuery, Redshift, etc.). A solid understanding of data modelling techniques (star schema, data vault, dimensional modelling). Proficiency in Excel-based data workflows for various Agile Retail projects. Hands-on …
London, South East, England, United Kingdom Hybrid / WFH Options
Office Angels
City of London, London, United Kingdom Hybrid / WFH Options
McGregor Boyall Associates Limited
What We're Looking For Hands-on Palantir Foundry expertise or transferable, client-facing data engineering experience with large-scale data platforms such as: Snowflake, Databricks, AWS Glue/Redshift, Google BigQuery Software engineering skills in Python, Java, or TypeScript/React. Strong data modelling, pipeline development, and API design experience. Excellent problem-solving and communication skills. Why This …
London, South East, England, United Kingdom Hybrid / WFH Options
McGregor Boyall
the platform post-deployment About You Requirements Minimum of 8 years' experience in data engineering within cloud environments, preferably AWS, including use of services such as S3, Lambda, Glue, Redshift, and IAM. Must be eligible for SC Clearance. A strong understanding of secure data practices in government or similarly regulated environments. Practical experience building and managing Python-based analytical …
London, South East, England, United Kingdom Hybrid / WFH Options
Awin
acceptable) Working knowledge of SQL and data modelling concepts Experience with BI tools (e.g., Power BI, Looker, Tableau) Familiarity with cloud data platforms such as Snowflake, BigQuery, or AWS Redshift Understanding of modern data architecture and APIs Our Offer Flexi-Week and Work-Life Balance: We prioritise your mental health and wellbeing, offering you a flexible four-day Flexi-Week …