have requirements: Driven self-starter mentality, with the ability to work independently; Python; SQL; experience building and maintaining ETL/data pipelines; expertise in data warehousing (any of BigQuery, Redshift, Snowflake or Databricks is fine); experience working with cloud infrastructures, AWS and/or GCP being most advantageous. 👍 Bonus points for experience with: Airflow, RudderStack, Expo and/or …
West London, London, England, United Kingdom Hybrid / WFH Options
Delaney & Bourton
selection, cost management and team management. Experience required: experience in building and scaling BI and data architecture; expertise in modern BI and data warehouse (DW) platforms such as Snowflake, BigQuery, Redshift, Power BI, etc.; background in ETL/ELT tooling and data pipelines such as DBT, Fivetran, Airflow; experienced in cloud-based solutions (Azure, AWS or Google …
Newcastle Upon Tyne, Tyne and Wear, England, United Kingdom Hybrid / WFH Options
Delaney & Bourton
selection, cost management and team management. Experience required: experience in building and scaling BI and data architecture; expertise in modern BI and data warehouse (DW) platforms such as Snowflake, BigQuery, Redshift, Power BI, etc.; background in ETL/ELT tooling and data pipelines such as DBT, Fivetran, Airflow; experienced in cloud-based solutions (Azure, AWS or Google …
Greater London, England, United Kingdom Hybrid / WFH Options
Ignite Digital Talent
Strong hands-on experience with Python in a data context; proven skills in SQL; experience with data warehousing (DWH), ideally with Snowflake or similar cloud data platforms (Databricks or Redshift); experience with DBT, Kafka, Airflow, and modern ELT/ETL frameworks; familiarity with data visualisation tools like Sisense, Looker, or Tableau; solid understanding of data architecture, transformation workflows, and …
London, South East, England, United Kingdom Hybrid / WFH Options
Client Server Ltd
have strong Python and SQL coding skills. You have experience with big data frameworks and tools, including Spark. You have a good knowledge of AWS data services (e.g. S3, Redshift, EMR, Glue). You have strong analytical, problem-solving and critical-thinking skills. You have excellent communication skills and experience of working across teams. What's in it for you …
new technologies essential for automating models and advancing our engineering practices. You're familiar with cloud technologies. You have experience working with data in a cloud data warehouse (Redshift, Snowflake, Databricks, or BigQuery). Experience with a modern data modeling technology (DBT). You document and communicate clearly. Some experience with technical content writing would be a plus. You …
… Strong familiarity with data warehousing, data lake/lakehouse architectures, and cloud-native analytics platforms. Hands-on experience with SQL and cloud data platforms (e.g., Snowflake, Azure, AWS Redshift, GCP BigQuery). Experience with BI/analytics tools (e.g., Power BI, Tableau) and data visualization best practices. Strong knowledge of data governance, data privacy, and compliance frameworks (e.g. …
and manage DBT models for data transformation and modeling in a modern data stack. Proficiency in SQL, Python, and PySpark. Experience with AWS services such as S3, Athena, Redshift, Lambda, and CloudWatch. Familiarity with data warehousing concepts and modern data stack architectures. Experience with CI/CD pipelines and version control (e.g., Git). Collaborate with data analysts …
if you have 4+ years of relevant work experience in Analytics, Business Intelligence, or Technical Operations; mastery of SQL, Python, and ETL using big data tools (Hive/Presto, Redshift); previous experience with web frameworks for Python such as Django/Flask is a plus; experience writing data pipelines using Airflow; fluency in Looker and/or Tableau; strong …
scalable product adoption datasets, ensuring ease of downstream integration and rapid onboarding of new events or features. Data Egestion: Develop and manage data pipelines for exporting curated datasets from Redshift to platforms like Salesforce and Gainsight using reverse ETL tools (e.g., Hightouch). Data Ingestion: Own end-to-end responsibility for ingesting key productivity data from platforms such as …
Data Engineering Manager, Amazon Music Technology. We are seeking an ambitious Data Engineering Manager to join our Metrics and Data Platform team. The Metrics and Data Platform team plays a critical role in enabling Amazon Music's business decisions and data-driven software development by collecting and providing behavioral and operational metrics to our internal teams. We maintain … a scalable and robust data platform to support Amazon Music's rapid growth, and collaborate closely with data producers and data consumers to accelerate innovation using data. As a Data Engineering Manager, you will manage a team of talented Data Engineers. Your team collects billions of events a day, manages petabyte-scale datasets on Redshift and S3, and … pipelines with Spark, SQL, EMR, and Airflow. You will collaborate with product and technical stakeholders to solve challenging data modeling, data availability, data quality, and data governance problems. At Amazon Music, engineering managers are the primary drivers of their team's roadmap, priorities, and goals. You will be deeply involved in your team's execution, helping to remove obstacles …
Have you ever ordered a product on Amazon and, when that box with the smile arrived, wondered how it got to you so fast? Have you wondered where it came from and how much it cost Amazon to deliver it to you? If so, the Amazon Logistics (AMZL) Last Mile team is for you. We manage the … delivery of tens of millions of products every week to Amazon's customers, achieving on-time delivery in a cost-effective manner to deliver a smile for our customers. Amazon Logistics is looking for a customer-focused, analytically and technically skilled Data Engineer to build advanced data and reporting solutions for AMZL leadership and BI teams. This position … will be responsible for building and managing real-time data pipelines, maintaining reporting infrastructures, working on complex automation pipelines leveraging AWS, and building analytical tools to support our growing Amazon Logistics business in Japan. The successful candidate will be able to effectively extract, transform, load and visualize critical data to improve the latency and accuracy of the existing data …
Job ID: Amazon EU SARL (UK Branch). The EU Amazon Vendor Services (AVS) and Retail Vendor Experience (VX) Program teams are seeking a Data Engineer to design and implement scalable data solutions and pipelines that can meaningfully contribute to both programs. This role is pivotal in addressing major challenges that enhance vendor success, satisfaction, and growth on Amazon, contributing directly to our long-term strategy. Amazon's mission is to be Earth's most customer-centric company, where customers can discover anything they want to buy online at competitive prices, with vast selection and convenience. Core to this mission is our commitment to delighting not only customers but also vendors by inventing scalable solutions that exceed … WW VX programme focuses on creating a globally preferred, trusted, and efficient vendor experience across all touchpoints. Both programmes are essential inputs for improving the end-customer experience and Amazon's long-term free cash flow. Key job responsibilities: This role will sit within a data and analytics team supporting two large program teams (EU AVS and VX) while …
Inventory Management (AIM) team seeks talented individuals passionate about solving complex problems and driving impactful business decisions for our executives. The AIM team owns critical Tier 1 metrics for Amazon Retail stores, providing key insights to improve store health monitoring. We focus on enhancing selection, product availability, inventory efficiency, and inventory readiness to fulfill customer orders (FastTrack) while enabling … BASIC QUALIFICATIONS - 3+ years of data engineering experience - 4+ years of SQL experience - Experience with data modeling, warehousing and building ETL pipelines PREFERRED QUALIFICATIONS - Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions - Experience with non-relational databases/data stores (object storage, document or key-value stores, graph databases … support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner. Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
thrives in greenfield project environments, and enjoys working both independently and collaboratively. Key Responsibilities as a Principal Data Engineer: Propose and implement data solutions using AWS services including S3, Redshift, Lambda, Step Functions, DynamoDB, AWS Glue, and Matillion. Work directly with clients to define requirements, refine solutions, and ensure successful handover to internal teams. Design and implement ETL … to a collaborative, knowledge-sharing team culture. Required Qualifications & Skills: Strong experience in ETL processes and cloud data warehouse patterns. Hands-on expertise with AWS services (S3, Glue, Redshift). Proficiency with Matillion for data transformation. Experience working with various relational databases. Familiarity with data visualization tools such as QuickSight, Tableau, Looker, or QlikSense. Ability to …
collaborate directly with clients to shape strategy, drive delivery, and guide internal engineering standards. Your responsibilities: Build and maintain large-scale data lakes and ETL pipelines using AWS S3, Redshift, Glue, Lambda, DynamoDB, and Matillion. Translate client requirements into scalable and secure data architectures. Drive infrastructure-as-code and CI/CD deployment practices. Process structured and semi-structured … in fast-paced, high-value engagements. This Principal Data Engineer will bring: Extensive experience with ETL/ELT pipelines and data transformation patterns. Proficiency in AWS cloud services, particularly Redshift, Glue, Matillion, and S3. Strong command of data quality, data lineage, and metadata practices. Fluency in database technologies (both relational and NoSQL). Experience with Linux environments and data visualisation …
the data platform, including data pipelines, orchestration and modelling. Lead the team in building and maintaining robust data pipelines, data models, and infrastructure using tools such as Airflow, AWS Redshift, DBT and Looker, ensuring the team follows agile methodologies to improve delivery cadence and responsiveness. Contribute to hands-on coding, particularly in areas requiring architectural input, prototyping, or critical delivery … Strong mentoring skills and ability to foster team growth and development. Strong understanding of the data engineering lifecycle, from ingestion to consumption. Hands-on experience with our data stack (Redshift, Airflow, Python, DVT, MongoDB, AWS, Looker, Docker). Understanding of data modelling, transformation, and orchestration best practices. Experience delivering both internal analytics platforms and external data-facing products. Knowledge of …
/D Inside IR35. Key Responsibilities: Architect, implement, and manage infrastructure using Terraform, ensuring security, scalability, and reliability. Configure and optimize AWS services such as EC2, S3, Lambda, IAM, Redshift, and VPC to support business needs. Develop and maintain CI/CD pipelines using Git/GitLab and Jenkins for automated deployments and testing. Apply DevOps methodologies to streamline …