Responsibilities will include: Design, build, and maintain robust, scalable, and secure data pipelines using AWS services and Apache Spark. Develop and optimize data models for reporting and analytics in Redshift and other DWH platforms. Collaborate with Data Scientists, Analysts, and Business Stakeholders to understand data requirements and deliver clean, validated datasets. Monitor, troubleshoot, and optimize ETL/ELT workflows … using cloud platform technologies, alongside experience with a variety of database technologies including Oracle, Postgres, and MS SQL Server. Strong expertise in AWS services including AWS DMS, S3, Lambda, Glue, EMR, Redshift, and IAM. Proficient in Apache Spark (batch and/or streaming) and big data processing. Solid experience with SQL and performance tuning in data warehouse environments. Hands-on experience … with Amazon Redshift or equivalent, including table design, workload management, and implementing Redshift Spectrum. Experience building ETL/ELT pipelines using tools like AWS Glue, EMR, or custom frameworks. Familiarity with data modeling concepts. Excellent problem-solving and communication skills. Proficiency in Java and data pipeline development. Familiarity with version control systems (e.g., Git) and agile development …
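The ETL/ELT pipeline work this listing describes follows a standard extract-transform-load pattern. A minimal sketch in Python, with sqlite3 standing in for a warehouse such as Redshift (the table, column, and function names are invented for illustration):

```python
import sqlite3

def extract(rows):
    """Extract: yield raw records from a source (here, an in-memory list)."""
    yield from rows

def transform(records):
    """Transform: normalise fields and drop records failing validation."""
    for rec in records:
        if rec.get("amount") is None:
            continue  # drop rows that fail the validation rule
        yield (rec["id"], rec["name"].strip().lower(), float(rec["amount"]))

def load(conn, rows):
    """Load: bulk-insert the cleaned rows into the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (id INTEGER, name TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

def run_pipeline(conn, source_rows):
    load(conn, transform(extract(source_rows)))

conn = sqlite3.connect(":memory:")
run_pipeline(conn, [
    {"id": 1, "name": "  Alice ", "amount": "10.5"},
    {"id": 2, "name": "Bob", "amount": None},  # dropped by validation
])
```

In a production AWS pipeline the extract and load stages would typically be Glue jobs or Lambda functions reading from S3 and writing to Redshift, but the staged structure is the same.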
+ Bonus & Excellent Benefits | Key Responsibilities: Design, develop, and implement advanced data pipelines and ETL/ELT workflows using cloud-native services such as AWS Glue, Lambda, S3, Redshift, and EMR. Act as a technical authority in cloud data engineering by mentoring colleagues and promoting best practices. Collaborate cross-functionally with analysts, data scientists, and business stakeholders to translate …
Functions, and Kinesis. Work with structured and unstructured data from multiple sources, ensuring efficient data ingestion, transformation, and storage. Develop and optimize data lake and data warehouse solutions using Amazon S3, Redshift, Athena, and Lake Formation. Implement data governance, security, and compliance best practices, including IAM roles, encryption, and access controls. Monitor and optimize performance of data workflows … data engineering with a strong focus on AWS cloud technologies. Proficiency in Python, PySpark, SQL, and AWS Glue for ETL development. Hands-on experience with AWS data services, including Redshift, Athena, Glue, EMR, and Kinesis. Strong knowledge of data modeling, warehousing, and schema design. Experience with event-driven architectures, streaming data, and real-time processing using Kafka or Kinesis.
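The real-time processing this listing asks for (Kafka/Kinesis consumers) usually boils down to aggregating an event stream over time windows. A toy tumbling-window aggregation in plain Python, illustrating the concept rather than any real Kinesis or Kafka API (event shape and function name are invented):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed-size tumbling windows
    and count occurrences per key, as a stream consumer might."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

# Events as (epoch-second, event-type) pairs
events = [(0, "click"), (3, "click"), (7, "view"), (12, "click")]
result = tumbling_window_counts(events, 5)
```

A production consumer would read the same logical stream from a Kinesis shard or Kafka partition and checkpoint its position, but the windowed aggregation step looks much like this.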
London, England, United Kingdom Hybrid / WFH Options
Capgemini
triage and prioritise platform capabilities to deliver business and customer outcomes. AWS Data Product Development: Lead the development of cloud-native data products using AWS services such as S3, Redshift, Glue, Lambda, and DynamoDB, aligning with business needs. Technical Leadership & Advisory: Act as a trusted AWS expert, advising clients on cloud migration, data strategy, and architecture modernization while working … technical expertise in AWS – Proven experience in designing and implementing cloud-based data architectures, ETL pipelines, and cloud-native applications, using services such as AWS EC2, S3, Lambda, Glue, Redshift, RDS, IAM, and KMS. Leadership & Stakeholder Management – Ability to engage with C-level executives (CDOs, CTOs, CIOs), lead cross-functional teams, and drive technical strategy in complex enterprise environments.
Manchester, England, United Kingdom Hybrid / WFH Options
Capgemini
triage and prioritise platform capabilities to deliver business and customer outcomes. AWS Data Product Development: Lead the development of cloud-native data products using AWS services such as S3, Redshift, Glue, Lambda, and DynamoDB, aligning with business needs. Technical Leadership & Advisory: Act as a trusted AWS expert, advising clients on cloud migration, data strategy, and architecture modernization while working … technical expertise in AWS – Proven experience in designing and implementing cloud-based data architectures, ETL pipelines, and cloud-native applications, using services such as AWS EC2, S3, Lambda, Glue, Redshift, RDS, IAM, and KMS. Leadership & Stakeholder Management – Ability to engage with C-level executives (CDOs, CTOs, CIOs), lead cross-functional teams, and drive technical strategy in complex enterprise environments.
Wilmslow, England, United Kingdom Hybrid / WFH Options
The Citation Group
stack with robust, pragmatic solutions. Responsibilities: Develop and maintain ETL/ELT data pipelines using AWS data services, Databricks, and dbt. Manage and optimize data storage solutions such as Amazon S3, Redshift, RDS, and DynamoDB. Implement and manage infrastructure-as-code (IaC) using tools like Terraform or AWS CloudFormation. Monitor and optimize the performance, cost, and scalability of …
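The infrastructure-as-code responsibility above means declaring cloud resources in versioned templates rather than clicking them together. A minimal sketch that renders a CloudFormation-style template for a single S3 bucket as JSON (the logical resource name and bucket name are invented; a real deployment would pass this to CloudFormation, or use Terraform's HCL equivalent):

```python
import json

def s3_bucket_template(bucket_name):
    """Return a minimal CloudFormation template declaring one S3 bucket
    with versioning enabled."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DataLakeBucket": {  # logical ID, invented for the example
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,
                    "VersioningConfiguration": {"Status": "Enabled"},
                },
            }
        },
    }

template_json = json.dumps(s3_bucket_template("example-data-lake"), indent=2)
```

Because the template is plain text, it can live in Git alongside the pipeline code and be reviewed and rolled back like any other change, which is the core appeal of IaC.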
with data privacy regulations. Technical Competencies: The role is a hands-on technical leadership role with advanced experience in most of the following technologies. Cloud Platforms: AWS (Amazon Web Services): Knowledge of services like S3, EC2, Lambda, RDS, Redshift, EMR, SageMaker, Glue, and Kinesis. Azure: Proficiency in services like Azure Blob Storage, Azure Data Lake, VMs … Formation, Azure Purview. Data Security Tools: AWS Key Management Service (KMS), Azure Key Vault. Data Analytics & BI: Visualization Tools: Tableau, Power BI, Looker, and Grafana. Analytics Services: AWS Athena, Amazon QuickSight, Azure Stream Analytics. Development & Collaboration Tools: Version Control: Git (and platforms like GitHub, GitLab). CI/CD Tools: Jenkins, Travis CI, AWS CodePipeline, Azure DevOps. Other Key …
Warrington, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
stack with robust, pragmatic solutions. Responsibilities: Develop and maintain ETL/ELT data pipelines using AWS data services, Databricks, and dbt. Manage and optimize data storage solutions such as Amazon S3, Redshift, RDS, and DynamoDB. Implement and manage infrastructure-as-code (IaC) using tools like Terraform or AWS CloudFormation. Monitor and optimize the performance, cost, and scalability of …
from various sources including on-premise and cloud platforms. Develop and optimize data pipelines and data integration processes between on-premises systems and AWS cloud services such as S3, Redshift, RDS, Glue, and Lambda. Collaborate with Data Architects, Data Analysts, and BI developers to understand data requirements and deliver solutions. Ensure data quality, consistency, and governance across platforms. …/star schema), and data integration patterns. Hands-on experience with SQL and relational databases (e.g., Oracle, SQL Server, PostgreSQL). Familiarity with AWS data services such as S3, Redshift, Glue, EMR, Lambda, or RDS. Knowledge of scripting languages (e.g., Python, Shell) for automation and orchestration. Experience working in Agile environments and using tools such as JIRA, Git …
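The star-schema modelling mentioned above separates descriptive attributes (dimensions) from measures (facts). A minimal sketch using sqlite3 as a stand-in for a warehouse engine such as Redshift (table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension table: descriptive attributes about products
    CREATE TABLE dim_product (
        product_key  INTEGER PRIMARY KEY,
        product_name TEXT,
        category     TEXT
    );
    -- Fact table: measures plus foreign keys into dimensions
    CREATE TABLE fact_sales (
        sale_id     INTEGER PRIMARY KEY,
        product_key INTEGER REFERENCES dim_product(product_key),
        quantity    INTEGER,
        revenue     REAL
    );
""")
conn.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                 [(1, "widget", "hardware"), (2, "gizmo", "hardware")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                 [(10, 1, 3, 30.0), (11, 2, 1, 15.0), (12, 1, 2, 20.0)])

# Typical analytical query: aggregate fact measures grouped by a dimension attribute
rows = conn.execute("""
    SELECT d.product_name, SUM(f.revenue)
    FROM fact_sales f JOIN dim_product d USING (product_key)
    GROUP BY d.product_name ORDER BY d.product_name
""").fetchall()
```

The same layout scales to warehouse engines: Redshift would additionally let you pick distribution and sort keys on the fact table to keep these joins fast.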
such as Airflow. Experience with low/no-code pipeline development tools such as Talend or SnapLogic. Experience developing data pipelines using cloud services (AWS preferred) like Lambda, S3, Redshift, Glue, Athena, Secrets Manager or equivalent services. Experience of working with APIs for data extraction and interacting with cloud resources via APIs/CLIs/SDKs (e.g. boto3). … Experience building out a data warehouse on platforms such as Redshift, Snowflake, or Databricks. Comfortable working with Git for source control (in Azure DevOps repos or equivalent). Experience working in an Agile (Scrum) environment for product delivery using Azure DevOps or similar tools. Strong problem-solving abilities with the capability to quickly analyse issues and locate performance bottlenecks.
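Orchestrators such as Airflow, named above, fundamentally run tasks in dependency (DAG) order. A toy sketch of that idea using the standard-library `graphlib` module rather than Airflow's own API (task names and the `run_dag` helper are invented for illustration):

```python
from graphlib import TopologicalSorter

def run_dag(tasks, dependencies):
    """Run named task callables in dependency order, the way an
    orchestrator would. `dependencies` maps task -> set of upstream tasks."""
    order = []
    for name in TopologicalSorter(dependencies).static_order():
        tasks[name]()          # execute the task once its upstreams are done
        order.append(name)
    return order

log = []
tasks = {
    "extract":   lambda: log.append("extract"),
    "transform": lambda: log.append("transform"),
    "load":      lambda: log.append("load"),
}
deps = {"transform": {"extract"}, "load": {"transform"}}
order = run_dag(tasks, deps)
```

Airflow adds scheduling, retries, and observability on top, but the topological execution of a task graph is the core mechanic.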
London, England, United Kingdom Hybrid / WFH Options
EXL Service
guide other team members around its development/management. Qualifications and experience we consider to be essential for the role: 5+ years of experience in Data Engineering: SQL, DWH (Redshift or Snowflake), Python (PySpark), Spark, and associated data engineering jobs. Experience with AWS ETL pipeline services: Lambda, S3, EMR/Glue, Redshift (or Snowflake), Step Functions (preferred). Experience …
London, England, United Kingdom Hybrid / WFH Options
ZipRecruiter
business. Key Responsibilities Design, build, and maintain robust data pipelines using AWS services (Glue, Lambda, Step Functions, S3, etc.) Develop and optimize data lake and data warehouse solutions using Redshift, Athena, and related technologies Collaborate with data scientists, analysts, and business stakeholders to understand data requirements Ensure data quality, governance, and compliance with financial regulations Implement CI/CD …
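Pipelines built from Lambda and S3, as above, are typically wired together by S3 event notifications invoking a Lambda handler. A minimal pure-Python sketch of such a handler; the event shape follows the S3 notification format Lambda receives, while the bucket and key values are invented:

```python
def handler(event, context=None):
    """Minimal AWS Lambda handler: pull (bucket, key) pairs out of an
    S3 event notification. A real pipeline would then fetch and process
    each object, e.g. via boto3."""
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        results.append((s3["bucket"]["name"], s3["object"]["key"]))
    return results

# Trimmed-down shape of an S3 put-event as Lambda receives it
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "raw-data"},
                "object": {"key": "2024/01/trades.csv"}}}
    ]
}
out = handler(sample_event)
```

Keeping the handler a pure function of its event, as here, makes it trivially unit-testable without deploying anything.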
the team. Drive the design, development, and implementation of complex data pipelines and ETL/ELT processes using cloud-native technologies (e.g. AWS Glue, AWS Lambda, AWS S3, AWS Redshift, AWS EMR). Develop and maintain data quality checks, data validation rules, and data lineage documentation. Collaborate with data analysts, data scientists, business stakeholders, and product owners to understand …
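The data quality checks and validation rules mentioned here are often implemented as a small rule engine run against each batch before loading. A minimal sketch (rule names and the helper are invented; frameworks like Great Expectations or dbt tests provide the production-grade equivalent):

```python
def run_quality_checks(rows, checks):
    """Apply named validation rules to each row; return failures as
    (row_index, check_name) pairs so they can be logged or block the load."""
    failures = []
    for i, row in enumerate(rows):
        for name, rule in checks.items():
            if not rule(row):
                failures.append((i, name))
    return failures

checks = {
    "id_present":      lambda r: r.get("id") is not None,
    "amount_positive": lambda r: isinstance(r.get("amount"), (int, float)) and r["amount"] > 0,
}
rows = [{"id": 1, "amount": 9.5}, {"id": None, "amount": -2}]
failures = run_quality_checks(rows, checks)
```

A pipeline would typically fail fast (or quarantine bad rows) when `failures` is non-empty, which is exactly the gate these listings describe.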
tenant, data-heavy systems, ideally in a startup or fast-moving environment. Technical Stack: Languages/Tools: Python (REST API integrations), DBT, Airbyte, GitHub Actions. Modern Data Warehousing: Snowflake, Redshift, Databricks, or BigQuery. Cloud & Infra: AWS (ECS, S3, Step Functions), Docker (Kubernetes or Fargate a bonus). Data Modelling: Strong grasp of transforming structured/unstructured data into usable models …
processing) Apache Spark Streaming, Kafka or similar (for real-time data streaming) Experience using data tools in at least one cloud service - AWS, Azure or GCP (e.g. S3, EMR, Redshift, Glue, Azure Data Factory, Databricks, BigQuery, Dataflow, Dataproc). Would you like to join us as we work hard, have fun and make history?
to design innovative data solutions that address complex business requirements and drive decision-making. Your skills and experience: Proficiency with AWS Tools: Demonstrable experience using AWS Glue, AWS Lambda, Amazon Kinesis, Amazon EMR, Amazon Athena, Amazon DynamoDB, Amazon CloudWatch, Amazon SNS, and AWS Step Functions. Programming Skills: Strong experience with modern programming languages such as Python, Java, and Scala. Expertise in Data Storage Technologies: In-depth knowledge of Data Warehouse, Database technologies, and Big Data ecosystem technologies such as AWS Redshift, AWS RDS, and Hadoop. Experience with AWS Data Lakes: Proven experience working with AWS data lakes on AWS S3 to store and process both structured and unstructured data sets. …
maintain scalable data pipelines to ingest, process, and store large sets of financial data from various internal and external sources. Cloud Infrastructure Management: Leverage AWS services such as S3, Redshift, Lambda, Glue, and others to develop and maintain robust cloud-based data infrastructure. Automation: Use Terraform to automate infrastructure provisioning, deployment, and configuration, ensuring efficiency and repeatability in cloud … environments. Database Design & Optimisation: Design and optimise complex SQL queries and relational databases (e.g., Amazon Redshift, PostgreSQL, MySQL) to enable fast, efficient data retrieval and analytics. Data Transformation: Apply ETL/ELT processes to transform raw financial data into usable insights for business intelligence, reporting, and predictive analytics. Collaboration with Teams: Work closely with the platform team, data analysts … financial services or similar regulated industries. Strong understanding of data engineering concepts, including data modelling, ETL/ELT processes, and data warehousing. Proven experience with AWS services (e.g., S3, Redshift, Lambda, ECS, ECR, SNS, EventBridge, CloudWatch, Athena, etc.) for building and maintaining scalable data solutions in the cloud. Technical Skills (must have): Python: Proficient in Python for developing custom …
in a Principal or Lead role. Proven experience designing and delivering enterprise data strategies. Exceptional communication and stakeholder management skills. Expertise in enterprise-grade data warehouses (Snowflake, BigQuery, Redshift). Hands-on experience with Apache Airflow (or similar orchestration tools). Strong proficiency in Python and SQL for pipeline development. Deep understanding of data architecture, dimensional modelling, and …