Job ID: Amazon Web Services Australia Pty Ltd Are you a Senior Cloud Architect and GenAI consulting specialist? Do you have real-time Data Analytics, Data Warehousing, Big Data, Modern Data Strategy, Data Lake, Data Engineering and GenAI experience? Do you have senior stakeholder engagement experience to support pre-sales and deliver consulting engagements? Do you like to solve … Vetting Agency clearance (see ). Key job responsibilities Expertise: Collaborate with pre-sales and delivery teams to help partners and customers learn and use services such as AWS Glue, Amazon S3, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Kinesis, Amazon Redshift, Amazon Athena, AWS Lake Formation … Amazon DataZone, Amazon SageMaker, Amazon QuickSight and Amazon Bedrock. Solutions: Support pre-sales and deliver technical engagements with partners and customers. This includes participating in pre-sales visits, understanding customer requirements, creating consulting proposals and creating packaged data analytics service offerings. Delivery: Engagements include projects proving the use of AWS services to support new distributed computing …
London, South East, England, United Kingdom Hybrid / WFH Options
Holland & Barrett International Limited
from BI reporting to customer segmentation and content personalisation, your work will directly influence how H&B delivers for our customers. You'll work with cutting-edge technologies including Amazon Redshift, Matillion and the AWS ecosystem, and you'll be hands-on with SQL, ELT workflows and data modelling. This is an exciting opportunity to make a big … collaborating across product, tech and business teams. What You'll Be Doing Integrating raw data sources into the H&B data environment Building and scaling our data warehouse in Amazon Redshift Designing, building and maintaining ELT workflows Developing data cubes and applications to support BI, marketing and analytics Writing high-quality SQL and ensuring system health across our … years' experience transforming large datasets into analytics and BI solutions (e-commerce experience is a plus) Highly advanced SQL skills Strong experience with cloud data warehouses (AWS Redshift preferred) Experience with at least one programming language (Python preferred) A broad technical skillset and eagerness to learn — from Git and Linux to APIs, Docker, NoSQL and serverless tools Bonus: experience …
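The ELT pattern this role centres on (load raw data into the warehouse first, then transform it in place with SQL) can be sketched with Python's stdlib sqlite3 standing in for a warehouse such as Amazon Redshift. All table and column names here are illustrative, not from the posting.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Load: land raw order events untransformed (the "L" before the "T" in ELT).
cur.execute(
    "CREATE TABLE raw_orders (order_id TEXT, customer TEXT, amount_pence INTEGER, ordered_at TEXT)"
)
cur.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?, ?)",
    [
        ("o1", "alice", 1250, "2024-05-01"),
        ("o2", "alice", 800, "2024-05-02"),
        ("o3", "bob", 500, "2024-05-02"),
    ],
)

# Transform: build a reporting table inside the warehouse using plain SQL,
# rather than transforming rows in an external tool before loading (ETL).
cur.execute(
    """
    CREATE TABLE daily_revenue AS
    SELECT ordered_at AS day, SUM(amount_pence) / 100.0 AS revenue_gbp
    FROM raw_orders
    GROUP BY ordered_at
    ORDER BY ordered_at
    """
)

rows = cur.execute("SELECT * FROM daily_revenue").fetchall()
print(rows)
```

In a real Redshift deployment the transform step would typically be managed by a tool such as Matillion or dbt; the shape of the SQL is the same.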
Posted Today. Job requisition id: R. The purpose of this role is to oversee the development of our database marketing solutions, using database technologies such as Microsoft SQL Server/Azure, Amazon Redshift, Google BigQuery. The role will be involved in design, specifications, troubleshooting, and issue resolution. The ability to communicate to both technical and non-technical audiences is key. … visit . …
processing) Apache Spark Streaming, Kafka or similar (for real-time data streaming) Experience using data tools in at least one cloud service - AWS, Azure or GCP (e.g. S3, EMR, Redshift, Glue, Azure Data Factory, Databricks, BigQuery, Dataflow, Dataproc) Would you like to join us as we work hard, have fun and make history?
modular, efficient, and scalable data pipelines. Deep expertise in data modeling for both relational databases and data warehouses, including Star and Snowflake schema designs. Extensive experience working with AWS Redshift and Aurora for data warehousing and transactional workloads. Experience using dbt (Data Build Tool) for building modular, version-controlled, and testable data transformation workflows, with a strong understanding of …
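The star schema the posting asks for can be sketched with stdlib sqlite3 as a stand-in warehouse: one fact table whose keys point directly at denormalised dimension tables. All names and sample data are illustrative.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Star schema: fact_sales joins directly to flat dimensions. A snowflake
# design would further normalise dim_product (e.g. into product -> category).
cur.executescript("""
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, iso_date TEXT, month TEXT);
CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product,
    date_key    INTEGER REFERENCES dim_date,
    units       INTEGER
);
INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gadget', 'Electronics');
INSERT INTO dim_date VALUES (10, '2024-05-01', '2024-05'), (11, '2024-06-01', '2024-06');
INSERT INTO fact_sales VALUES (1, 10, 3), (2, 10, 5), (1, 11, 2);
""")

# A typical BI query: join the fact to a dimension and aggregate.
units_by_category = cur.execute("""
    SELECT p.category, SUM(f.units)
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.category ORDER BY p.category
""").fetchall()
print(units_by_category)
```

The trade-off the two designs make: star schemas keep BI queries to a single join per dimension, while snowflake schemas reduce redundancy at the cost of deeper joins.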
Nice to Haves: • Certification in dbt, Google Cloud Platform or related technologies. • Experience with other cloud platforms (e.g. AWS, Azure, Snowflake) and data warehouse/lakehouse technologies (e.g. Redshift, Databricks, Synapse) • Knowledge of distributed big data technologies. • Proficiency in Python. • Familiarity with data governance and compliance frameworks. Your characteristics as a Consultant will include: • Driven by delivering quality work …
data lake architectures Advanced proficiency in Python (including PySpark) and SQL, with experience building scalable data pipelines and analytics workflows Strong background in cloud-native data infrastructure (e.g., BigQuery, Redshift, Snowflake, Databricks) Demonstrated ability to lead teams, set technical direction, and collaborate effectively across business and technology functions Desirable skills: Familiarity with machine learning pipelines and MLOps practices Additional …
practices and contribute to team knowledge. Required Skills 3+ years in a data engineering role. Proficient in SQL and Python. Strong experience with AWS services (e.g., Lambda, Glue, Redshift, S3). Solid understanding of data warehousing and modelling: star/snowflake schemas. Familiarity with Git, CI/CD pipelines, and containerisation (e.g., Docker). Ability to troubleshoot BI …
Proven experience as a Data Architect or Lead Data Engineer in AWS environments * Deep understanding of cloud-native data services: S3, Redshift, Glue, Athena, EMR, Kinesis, Lambda * Strong hands-on expertise in data modelling, distributed systems, and pipeline orchestration (Airflow, Step Functions) * Background in energy, trading, or financial markets is a strong plus * Excellent knowledge of Python, SQL, and …
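One concrete piece of the S3/Glue/Athena stack this role names is the Hive-style partition layout that lets Athena prune data at query time. The key-building convention can be sketched in plain Python; the bucket and table names are illustrative, not from the posting.

```python
from datetime import date


def partition_key(table: str, event_date: date, part: int) -> str:
    """Build a Hive-style partitioned object key (dt=YYYY-MM-DD).

    Athena and Glue read the dt= path segment as a partition column, so a
    query filtered on dt only scans the matching prefixes.
    """
    return f"s3://data-lake/{table}/dt={event_date.isoformat()}/part-{part:05d}.parquet"


key = partition_key("trades", date(2024, 5, 1), 0)
print(key)  # s3://data-lake/trades/dt=2024-05-01/part-00000.parquet
```

Writers (Glue jobs, Spark, Kinesis Firehose) emit keys in this shape; the Glue Data Catalog then maps each dt= prefix to a partition of the table.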
Greater Leeds Area, United Kingdom Hybrid / WFH Options
Corecom Consulting
development projects, data modelling, and cloud data platform deployments. Mentor data engineers and contribute to best practices across the team. Key skills: Strong experience with AWS data services (S3, Redshift, Glue, Lambda, Lake Formation, CloudFormation). Proficiency in SQL, Python (Pandas), Spark or Iceberg. Experience with data warehouse design, ETL/ELT, and CI/CD pipelines (GitHub, CodeBuild …
Leeds, West Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
Fruition Group
engineers and support their growth. Implement best practices for data security and compliance. Collaborate with stakeholders and external partners. Skills & Experience: Strong experience with AWS data technologies (e.g., S3, Redshift, Lambda). Proficient in Python, Apache Spark, and SQL. Experience in data warehouse design and data migration projects. Cloud data platform development and deployment. Expertise across data warehouse and …
solving skills and ability to explain technical ideas to non-technical audiences. Experience with real-time data pipelines, event-driven architectures (Kafka/Kinesis), or modern data warehouses (Snowflake, Redshift, BigQuery) is a plus. Our Benefits: Paid Vacation Days Health insurance Commuter benefit Employee Stock Purchase Plan (ESPP) Mental Health & Family Forming Benefits Continuing education and corridor travel benefits …
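The core idea behind the event-driven pipelines mentioned here (a Kafka or Kinesis consumer aggregating a stream into fixed time windows) can be shown without any broker, using plain Python over an in-memory list of events. The event shapes and window size are illustrative assumptions.

```python
from collections import defaultdict


def tumbling_counts(events, window_secs=60):
    """Count (timestamp, key) events per fixed, non-overlapping time window.

    This is the tumbling-window aggregation a simple stream consumer
    performs; here the "stream" is just an iterable of tuples.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts // window_secs * window_secs  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)


events = [(5, "click"), (30, "click"), (65, "click"), (70, "view")]
print(tumbling_counts(events))
```

A production consumer adds delivery semantics (offsets, checkpoints, late-event handling) on top, but the windowing arithmetic is the same.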
Engineers and Associates, and lead technical discussions and design sessions. Key requirements: Must-Have: Strong experience with AWS services: Glue, Lambda, S3, Athena, Step Functions, EventBridge, EMR, EKS, RDS, Redshift, DynamoDB. Strong Python development skills. Proficient with Docker, containerization, and virtualization. Hands-on experience with CI/CD, especially GitLab CI. Solid experience with Infrastructure as Code (Terraform, CloudFormation …
Proficiency in SQL, Python, and ETL tools (StreamSets, dbt, etc.) Hands-on experience with Oracle RDBMS Data migration experience to Snowflake Experience with AWS services such as S3, Lambda, Redshift, and Glue. Strong understanding of data warehousing concepts and data modelling. Excellent problem-solving and communication skills, with a focus on delivering high-quality solutions. Understanding/hands-on …
Requirements Proven experience as a Data Engineer, with a track record of designing and managing scalable data pipelines and infrastructure in AWS. Strong expertise with AWS services, including Glue, Redshift, Data Catalog, and large-scale data storage solutions such as data lakes. Proficiency in ETL/ELT tools (e.g. Apache Spark, Airflow, dbt). Skilled in data processing languages …
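Running an actual Airflow DAG needs the airflow package, but the property that makes a pipeline task safe to schedule and retry — an idempotent, partition-overwriting write — can be shown in plain Python. The dict standing in for a data lake and all names are illustrative assumptions.

```python
def write_partition(store: dict, partition: str, rows: list) -> None:
    """Replace the whole partition instead of appending.

    Overwriting means a retried or backfilled task run leaves the lake in
    the same state as a single successful run (idempotent write), which is
    what lets orchestrators like Airflow retry tasks freely.
    """
    store[partition] = list(rows)


lake: dict = {}
write_partition(lake, "dt=2024-05-01", [{"id": 1}, {"id": 2}])
write_partition(lake, "dt=2024-05-01", [{"id": 1}, {"id": 2}])  # retry: no duplicates
print(len(lake["dt=2024-05-01"]))  # 2
```

An append-style write would leave four rows after the retry; partition overwrite is the usual way Spark and dbt jobs avoid that.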
Wilmslow, England, United Kingdom Hybrid / WFH Options
The Citation Group
Programming experience in Python Skills and experience we’d love you to have... Understanding of cloud computing security concepts Experience in relational cloud-based database technologies like Snowflake, BigQuery, Redshift Experience in open-source technologies like Spark, Kafka, Beam Good understanding of cloud providers – AWS, Microsoft Azure, Google Cloud Familiarity with dbt, Delta Lake, Databricks Experience working in an …
or Kafka Streams)? Which statement best describes your hands-on responsibility for architecting and tuning cloud-native data lake/warehouse solutions (e.g., AWS S3 + Glue/Redshift, GCP BigQuery, Azure Synapse)? What best reflects your experience building ETL/ELT workflows with Apache Airflow (or similar) and integrating them into containerised CI/CD pipelines …
data governance, data quality, and compliance standards. Strong analytical and problem-solving skills with attention to detail. Excellent communication and documentation abilities. Preferred: Exposure to cloud data platforms (AWS Redshift, Azure Synapse, Google BigQuery, Snowflake). Knowledge of Python, R, or other scripting languages for data manipulation. Experience in banking, financial services, or enterprise IT environments.