Responsibilities will include: Design, build, and maintain robust, scalable, and secure data pipelines using AWS services and Apache Spark. Develop and optimize data models for reporting and analytics in Redshift and other DWH platforms. Collaborate with Data Scientists, Analysts, and Business Stakeholders to understand data requirements and deliver clean, validated datasets. Monitor, troubleshoot, and optimize ETL/ELT workflows … using cloud platform technologies, alongside experience with a variety of database technologies including Oracle, Postgres and MS SQL Server. Strong expertise in AWS services including AWS DMS, S3, Lambda, Glue, EMR, Redshift, and IAM. Proficient in Apache Spark (batch and/or streaming) and big data processing. Solid experience with SQL and performance tuning in data warehouse environments. Hands-on experience … with Amazon Redshift or equivalent, including table design, workload management, and implementing Redshift Spectrum. Experience building ETL/ELT pipelines using tools like AWS Glue, EMR, or custom frameworks. Familiarity with data modeling concepts. Excellent problem-solving and communication skills. Proficiency in Java and data pipeline development. Familiarity with version control systems (e.g., Git) and agile development …
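By way of illustration, the first responsibility above (Spark pipelines on AWS feeding Redshift/Spectrum) might look like the minimal sketch below. The bucket names, paths, and validation rules are hypothetical placeholders, not taken from the posting:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical daily batch: read raw JSON events from S3, validate and
# deduplicate, then write partitioned Parquet that Redshift Spectrum or
# Athena can query in place.
spark = SparkSession.builder.appName("daily-events-batch").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/events/2024-01-01/")

clean = (
    raw.filter(F.col("event_id").isNotNull())      # drop malformed rows
       .dropDuplicates(["event_id"])               # idempotent re-runs
       .withColumn("event_date", F.to_date("event_ts"))
)

(clean.write
      .mode("overwrite")
      .partitionBy("event_date")                   # enables partition pruning
      .parquet("s3://example-curated-bucket/events/"))

spark.stop()
```

Partitioning by date is where the "table design" point above earns its keep: Spectrum and Athena then scan only the partitions a query actually touches.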
+ Bonus & Excellent Benefits | Key Responsibilities: Design, develop, and implement advanced data pipelines and ETL/ELT workflows using cloud-native services such as AWS Glue, Lambda, S3, Redshift, and EMR. Act as a technical authority in cloud data engineering by mentoring colleagues and promoting best practices. Collaborate cross-functionally with analysts, data scientists, and business stakeholders to translate …
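As a hedged sketch of the event-driven Glue/Lambda/S3 pattern this posting names, assuming a hypothetical Glue job called load_orders and the standard S3-trigger event shape:

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    """Lambda entry point, triggered by an S3 ObjectCreated event.
    Starts a Glue job run for the newly arrived object. The job name
    and argument key are hypothetical placeholders."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    run = glue.start_job_run(
        JobName="load_orders",
        Arguments={"--input_path": f"s3://{bucket}/{key}"},
    )
    return {"JobRunId": run["JobRunId"]}
```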
years of experience in data engineering roles; Strong proficiency with SQL and data modelling concepts; Experience with dbt or similar transformation frameworks; Hands-on experience with AWS data services (Redshift, S3, Glue, Lambda); Proficiency with Python for data processing and pipeline development; Experience with workflow orchestration tools like Airflow or similar; Knowledge of streaming data technologies such as Kafka …
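Since this posting pairs Airflow with dbt, a minimal sketch of how the two are commonly wired together follows; the DAG id, schedule, and paths are assumptions rather than anything specified in the ad:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily pipeline: land raw data, then rebuild the dbt models.
with DAG(
    dag_id="daily_warehouse_build",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_raw",
        bash_command="python /opt/pipelines/extract.py",  # placeholder script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/warehouse",
    )
    extract >> transform
```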
Functions, and Kinesis. Work with structured and unstructured data from multiple sources, ensuring efficient data ingestion, transformation, and storage. Develop and optimize data lake and data warehouse solutions using Amazon S3, Redshift, Athena, and Lake Formation. Implement data governance, security, and compliance best practices, including IAM roles, encryption, and access controls. Monitor and optimize performance of data workflows … data engineering with a strong focus on AWS cloud technologies. Proficiency in Python, PySpark, SQL, and AWS Glue for ETL development. Hands-on experience with AWS data services, including Redshift, Athena, Glue, EMR, and Kinesis. Strong knowledge of data modeling, warehousing, and schema design. Experience with event-driven architectures, streaming data, and real-time processing using Kafka or Kinesis. …
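For the Glue-centred ETL work described above, a Glue PySpark job skeleton is sketched below; the catalog database, table, and output path are hypothetical:

```python
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read via the Glue Data Catalog (hypothetical database/table names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_raw", table_name="orders"
)

# Filter malformed rows with Spark, then convert back to a DynamicFrame.
valid = DynamicFrame.fromDF(
    orders.toDF().filter("order_id IS NOT NULL"), glue_context, "valid"
)

# Write Parquet that Athena or Redshift Spectrum can query in place.
glue_context.write_dynamic_frame.from_options(
    frame=valid,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/orders/"},
    format="parquet",
)
job.commit()
```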
will assist clients in choosing a platform, defining their data needs and migrating them to a modern cloud data environment using cloud providers such as Azure, Google Cloud Platform, Amazon Web Services, Snowflake, Databricks or Teradata. To really stand out and make us fit for the future in a constantly changing world, each and every one of us at … Implementing cloud data architecture and data integration patterns for one or more of the cloud providers (AWS Glue, Azure Data Factory, Event Hub, Databricks, Snowflake etc.), storage and processing (Redshift, Azure Synapse, BigQuery, Snowflake); Infrastructure as code (CloudFormation, Terraform); Understanding and thorough knowledge of Data Warehousing concepts (normalization, OLAP, OLTP, Vault data model, graphs, star & snowflake schemas); Applying knowledge …
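To make the infrastructure-as-code requirement concrete, one minimal pattern is deploying a CloudFormation template programmatically; in practice the template would live in version control (or be expressed in Terraform), and the stack and bucket names below are placeholders:

```python
import json

import boto3

# Minimal CloudFormation template: one S3 bucket for a data-lake raw zone.
# Bucket names must be globally unique; this one is illustrative only.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "RawZoneBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-raw-zone-bucket"},
        }
    },
}

boto3.client("cloudformation").create_stack(
    StackName="data-lake-raw-zone",
    TemplateBody=json.dumps(template),
)
```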
and integrity through validation, testing, and monitoring Implement data security and compliance measures in accordance with organizational policies Utilize AWS services such as S3 and RDS (potentially Glue and Redshift) for data storage and processing Develop APIs and interfaces to facilitate data access and integration Participate in Agile development processes, contributing to sprint planning and reviews Document data engineering … in programming languages such as Python, Java, or Scala. Strong experience with SQL and relational databases, particularly PostgreSQL Hands-on experience with AWS data services (e.g., S3, RDS, Glue, Redshift) Familiarity with data modeling, ETL development, and data warehousing concepts Experience with API development and integration Knowledge of data security and compliance standards Ability to work collaboratively in a …
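Because this role combines PostgreSQL with API development, a minimal sketch of a read-only data-access endpoint is shown below; the table, columns, and environment variable are hypothetical:

```python
import os

import psycopg2
from flask import Flask, jsonify

app = Flask(__name__)

def get_conn():
    # Connection string comes from the environment, never hard-coded.
    return psycopg2.connect(os.environ["DATABASE_URL"])

@app.route("/customers/<int:customer_id>")
def get_customer(customer_id):
    """Return one customer row as JSON (schema is a placeholder)."""
    with get_conn() as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT id, name, created_at FROM customers WHERE id = %s",
            (customer_id,),  # parameterised query, no string formatting
        )
        row = cur.fetchone()
    if row is None:
        return jsonify(error="not found"), 404
    return jsonify(id=row[0], name=row[1], created_at=row[2].isoformat())
```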
London, Manchester & Glasgow, United Kingdom Hybrid / WFH Options
Capgemini
triage and prioritise platform capabilities to deliver business and customer outcomes. AWS Data Product Development: Lead the development of cloud-native data products using AWS services such as S3, Redshift, Glue, Lambda, and DynamoDB, aligning with business needs. Technical Leadership & Advisory: Act as a trusted AWS expert, advising clients on cloud migration, data strategy, and architecture modernization while working … technical expertise in AWS – Proven experience in designing and implementing cloud-based data architectures, ETL pipelines, and cloud-native applications, using services such as AWS EC2, S3, Lambda, Glue, Redshift, RDS, IAM, and KMS. Leadership & Stakeholder Management – Ability to engage with C-level executives (CDOs, CTOs, CIOs), lead cross-functional teams, and drive technical strategy in complex enterprise environments. …
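Of the services this role lists, DynamoDB has the least SQL-like access pattern, so a short sketch may help; the table name, keys, and attributes are hypothetical:

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table: orders keyed by customer (partition) and date (sort).
table = boto3.resource("dynamodb").Table("orders")

table.put_item(
    Item={
        "customer_id": "C-1001",
        "order_date": "2024-01-15",
        "total_pence": 4599,
    }
)

# Fetch one customer's January 2024 orders with a single key-range query.
response = table.query(
    KeyConditionExpression=Key("customer_id").eq("C-1001")
    & Key("order_date").begins_with("2024-01")
)
for item in response["Items"]:
    print(item["order_date"], item["total_pence"])
```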
with data privacy regulations. Technical Competencies This is a hands-on technical leadership role with advanced experience expected in most of the following technologies: Cloud Platforms: AWS (Amazon Web Services): Knowledge of services like S3, EC2, Lambda, RDS, Redshift, EMR, SageMaker, Glue, and Kinesis. Azure: Proficiency in services like Azure Blob Storage, Azure Data Lake, VMs … Formation, Azure Purview. Data Security Tools: AWS Key Management Service (KMS), Azure Key Vault. Data Analytics & BI: Visualization Tools: Tableau, Power BI, Looker, and Grafana. Analytics Services: AWS Athena, Amazon QuickSight, Azure Stream Analytics. Development & Collaboration Tools: Version Control: Git (and platforms like GitHub, GitLab). CI/CD Tools: Jenkins, Travis CI, AWS CodePipeline, Azure DevOps. Other Key …
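As a small illustration of the KMS item in the security-tools list, direct encrypt/decrypt of a short secret is sketched below (the key alias is hypothetical; payloads over 4 KB would use envelope encryption instead):

```python
import boto3

kms = boto3.client("kms")

# Encrypt a small secret under a customer-managed key.
ciphertext = kms.encrypt(
    KeyId="alias/data-platform-key",      # hypothetical key alias
    Plaintext=b"db-password-example",
)["CiphertextBlob"]

# Decrypt later; KMS resolves the key from metadata in the blob itself.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"db-password-example"
```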
Warrington & Wilmslow, Cheshire, United Kingdom Hybrid / WFH Options
The Citation Group
stack with robust, pragmatic solutions. Responsibilities: develop and maintain ETL/ELT data pipelines using AWS data services, Databricks and dbt. Manage and optimize data storage solutions such as Amazon S3, Redshift, RDS, and DynamoDB. Implement and manage infrastructure-as-code (IaC) using tools like Terraform or AWS CloudFormation. Monitor and optimize the performance, cost, and scalability of …
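The monitoring responsibility above often comes down to emitting custom metrics that dashboards and alarms can watch; a sketch with a hypothetical namespace and pipeline name:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Emit a custom metric after each pipeline run so performance or volume
# regressions surface on a CloudWatch dashboard or alarm.
cloudwatch.put_metric_data(
    Namespace="DataPlatform/Pipelines",          # hypothetical namespace
    MetricData=[
        {
            "MetricName": "RowsLoaded",
            "Dimensions": [{"Name": "Pipeline", "Value": "orders_daily"}],
            "Value": 125000,
            "Unit": "Count",
        }
    ],
)
```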
Strong knowledge of SQL and relational databases (e.g., MySQL, PostgreSQL, MS SQL Server). Experience with NoSQL databases (e.g., MongoDB, Cassandra, HBase). Familiarity with data warehousing solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake). Hands-on experience with ETL frameworks and tools (e.g., Apache NiFi, Talend, Informatica, Airflow). Knowledge of big data technologies (e.g., Hadoop, Apache …
such as Airflow. Experience with low/no-code pipeline development tools such as Talend or SnapLogic. Experience developing data pipelines using cloud services (AWS preferred) like Lambda, S3, Redshift, Glue, Athena, Secrets Manager or equivalent services. Experience of working with APIs for data extraction and interacting with cloud resources via APIs/CLIs/SDKs (e.g. boto3). … Experience building out a data warehouse on platforms such as Redshift, Snowflake, or Databricks. Comfortable working with Git for source control (in Azure DevOps repos or equivalent). Experience working in an Agile (Scrum) environment for product delivery using Azure DevOps or similar tools. Strong problem-solving abilities with the capability to quickly analyse issues and locate performance bottlenecks …
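Two of the services named here compose naturally: credentials come from Secrets Manager instead of config files, and Athena queries are submitted and polled via boto3. The secret name, database, and result location below are placeholders:

```python
import json
import time

import boto3

# Fetch credentials at runtime (would feed e.g. a Redshift connection).
secrets = boto3.client("secretsmanager")
creds = json.loads(
    secrets.get_secret_value(SecretId="prod/warehouse/readonly")["SecretString"]
)

# Submit an Athena query and poll until it finishes.
athena = boto3.client("athena")
query_id = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) FROM events GROUP BY 1",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
```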
London, England, United Kingdom Hybrid / WFH Options
ZipRecruiter
business. Key Responsibilities: Design, build, and maintain robust data pipelines using AWS services (Glue, Lambda, Step Functions, S3, etc.) Develop and optimize data lake and data warehouse solutions using Redshift, Athena, and related technologies Collaborate with data scientists, analysts, and business stakeholders to understand data requirements Ensure data quality, governance, and compliance with financial regulations Implement CI/CD …
the team. Drive the design, development and implementation of complex data pipelines and ETL/ELT processes using cloud-native technologies (e.g. AWS Glue, AWS Lambda, AWS S3, AWS Redshift, AWS EMR). Develop and maintain data quality checks, data validation rules and data lineage documentation. Collaborate with data analysts, data scientists, business stakeholders and product owners to understand …
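The data-quality responsibility above is usually implemented as explicit, testable rules run against each batch; a minimal pandas sketch with illustrative column names and thresholds:

```python
import pandas as pd

def check_quality(df: pd.DataFrame) -> list:
    """Return human-readable failures; an empty list means the batch
    passes. Column names and thresholds are illustrative placeholders."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:
        failures.append(f"customer_id null rate {null_rate:.1%} exceeds 1%")
    if (df["total_pence"] < 0).any():
        failures.append("negative order totals")
    return failures

batch = pd.DataFrame(
    {"order_id": [1, 2, 2],
     "customer_id": ["a", None, "c"],
     "total_pence": [100, 250, -5]}
)
for failure in check_quality(batch):
    print("DQ FAIL:", failure)
```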
tenant, data-heavy systems, ideally in a startup or fast-moving environment. Technical Stack: Languages/Tools: Python (REST API integrations), DBT, Airbyte, GitHub Actions Modern Data Warehousing: Snowflake, Redshift, Databricks, or BigQuery. Cloud & Infra: AWS (ECS, S3, Step Functions), Docker (Kubernetes or Fargate a bonus) Data Modelling: Strong grasp of transforming structured/unstructured data into usable models …
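Given the emphasis on Python REST API integrations, a paginated extraction sketch follows; the endpoint shape (the "results"/"next" keys) is an assumption about the upstream API, not a real contract:

```python
import requests

def fetch_all(base_url, token):
    """Yield every record from a hypothetical paginated REST endpoint."""
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {token}"
    url = f"{base_url}/v1/records?page=1"
    while url:
        response = session.get(url, timeout=30)
        response.raise_for_status()
        payload = response.json()
        yield from payload["results"]
        url = payload.get("next")   # None on the last page ends the loop

# Usage (placeholder URL and loader):
# for record in fetch_all("https://api.example.com", token="..."):
#     load_into_staging(record)
```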
and lead technical discussions and design sessions. Key Requirements (Must-Have): Strong experience with AWS services: Glue, Lambda, S3, Athena, Step Functions, EventBridge, EMR, EKS, RDS, Redshift, DynamoDB. Strong Python development skills. Proficient with Docker, containerization, and virtualization. Hands-on experience with CI/CD, especially GitLab CI. Solid experience with Infrastructure as Code (Terraform, CloudFormation) …
processing) Apache Spark Streaming, Kafka or similar (for real-time data streaming) Experience using data tools in at least one cloud service - AWS, Azure or GCP (e.g. S3, EMR, Redshift, Glue, Azure Data Factory, Databricks, BigQuery, Dataflow, Dataproc) Would you like to join us as we work hard, have fun and make history?
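For the Spark Structured Streaming plus Kafka combination mentioned here, a minimal consumer is sketched below; the broker address, topic, and sink paths are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-events-stream").getOrCreate()

# Subscribe to a hypothetical 'events' topic.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka values arrive as bytes; cast to string and keep the ingest time.
events = stream.selectExpr(
    "CAST(value AS STRING) AS body",
    "timestamp AS ingested_at",
)

# Land micro-batches as Parquet; the checkpoint makes restarts safe.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-stream-sink/events/")
    .option("checkpointLocation", "s3://example-stream-sink/_checkpoints/")
    .start()
)
query.awaitTermination()
```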
to design innovative data solutions that address complex business requirements and drive decision-making. Your skills and experience: Proficiency with AWS Tools: Demonstrable experience using AWS Glue, AWS Lambda, Amazon Kinesis, Amazon EMR, Amazon Athena, Amazon DynamoDB, Amazon CloudWatch, Amazon SNS and AWS Step Functions. Programming Skills: Strong experience with modern programming languages such as Python, Java, and Scala. Expertise in Data Storage Technologies: In-depth knowledge of data warehouse, database, and big data ecosystem technologies such as AWS Redshift, AWS RDS, and Hadoop. Experience with AWS Data Lakes: Proven experience working with AWS data lakes on AWS S3 to store and process both structured and unstructured data sets. To be …
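Of the services listed, Kinesis is the producer-side one most often written by hand; a sketch with a hypothetical stream name, where the partition key keeps each customer's events ordered on one shard:

```python
import json

import boto3

kinesis = boto3.client("kinesis")

def publish_event(event):
    """Publish one event to a hypothetical 'clickstream-events' stream."""
    kinesis.put_record(
        StreamName="clickstream-events",
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["customer_id"],   # preserves per-customer ordering
    )

publish_event({"customer_id": "C-1001", "action": "page_view", "path": "/home"})
```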