with data privacy regulations. Technical Competencies: this is a hands-on technical leadership role requiring advanced experience with most of the following technologies. Cloud Platforms: AWS (Amazon Web Services): knowledge of services like S3, EC2, Lambda, RDS, Redshift, EMR, SageMaker, Glue, and Kinesis. Azure: proficiency in services like Azure Blob Storage, Azure Data Lake, VMs … Formation, Azure Purview. Data Security Tools: AWS Key Management Service (KMS), Azure Key Vault. Data Analytics & BI: Visualization Tools: Tableau, Power BI, Looker, and Grafana. Analytics Services: AWS Athena, Amazon QuickSight, Azure Stream Analytics. Development & Collaboration Tools: Version Control: Git (and platforms like GitHub, GitLab). CI/CD Tools: Jenkins, Travis CI, AWS CodePipeline, Azure DevOps. Other Key …
the team. Drive the design, development and implementation of complex data pipelines and ETL/ELT processes using cloud-native technologies (e.g. AWS Glue, AWS Lambda, AWS S3, AWS Redshift, AWS EMR). Develop and maintain data quality checks, data validation rules and data lineage documentation. Collaborate with data analysts, data scientists, business stakeholders and product owners to understand …
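As a rough illustration of the Glue/Lambda pipeline work described in the listing above, here is a minimal Python sketch of an S3-triggered Lambda that starts a Glue ETL job; the job name, bucket layout and argument keys are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: S3-triggered Lambda that kicks off a Glue ETL job.
# The Glue job name and its arguments are illustrative, not from the listing.
import json
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # Each S3 "ObjectCreated" record carries the bucket and key of the new file
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Hand the new object's location to a (hypothetical) Glue job that
        # transforms the data and loads it into Redshift
        glue.start_job_run(
            JobName="raw-to-redshift-etl",
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
    return {"statusCode": 200, "body": json.dumps("Glue job(s) started")}
```

In a setup like this, the Lambda stays a thin trigger and the heavy transformation lives in the Glue job, which keeps the pipeline easy to monitor and re-run.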
processing) Apache Spark Streaming, Kafka or similar (for real-time data streaming). Experience using data tools in at least one cloud service - AWS, Azure or GCP (e.g. S3, EMR, Redshift, Glue, Azure Data Factory, Databricks, BigQuery, Dataflow, Dataproc). Would you like to join us as we work hard, have fun and make history? Apply for this job …
AWS CloudFormation, Terraform). Deep expertise in AWS (EC2, S3, Lambda, API Gateway) or equivalent services in GCP/Azure. Proficiency with SQL and NoSQL databases (e.g., MySQL, PostgreSQL, Redshift, Snowflake, DynamoDB, MongoDB). Strong coding practices and experience implementing robust data quality and monitoring frameworks. Understanding of data security, governance, and compliance. Working experience of data governance and …
City of London, London, United Kingdom Hybrid / WFH Options
I3 Resourcing Limited
Skillset: delivery experience; building solutions in Snowflake; insurance experience - advantageous but not necessary. MUST HAVE: Snowflake, AWS, SnowPro Core. Key Responsibilities: Lead the design and implementation of Snowflake [and Redshift] based data warehousing solutions within an AWS environment. Mentor team members through code reviews and pair programming. Build and support new AWS-native cloud data warehouse solutions. Develop … experience as a data engineer with a strong focus on Snowflake and AWS services in large-scale enterprise environments. Extensive experience in AWS services, e.g. EC2, S3, RDS, DynamoDB, Redshift, Lambda, API Gateway. Strong SQL skills for complex data queries and transformations. Python programming for data processing and analysis is a plus. Strong acumen for application health through performance …
focus on building scalable data solutions. Experience with data pipeline orchestration tools such as Dagster or similar. Familiarity with cloud platforms (e.g. AWS) and their data services (e.g., S3, Redshift, Snowflake). Understanding of data warehousing concepts and experience with modern warehousing solutions. Experience with GitHub Actions (or similar) and implementing CI/CD pipelines for data workflows and …
interested in building data and science solutions to drive strategic direction? Based in Tokyo, the Science and Data Technologies team designs, builds, operates, and scales the data infrastructure powering Amazon's retail business in Japan. Working with a diverse, global team serving customers and partners worldwide, you can make a significant impact while continuously learning and experimenting with cutting … working with large-scale data, excels in highly complex technical environments, and above all, has a passion for data. You will lead the development of data solutions to optimize Amazon's retail operations in Japan, turning business needs into robust data pipelines and architecture. Leveraging your deep experience in data infrastructure and passion for enabling data-driven business impact … with scientists, software engineers and business teams to identify and implement strategic data opportunities. Key job responsibilities Your key responsibilities include: - Create data solutions with AWS services such as Redshift, S3, EMR, Lambda, SageMaker, CloudWatch etc. - Implement robust data solutions and scalable data architectures. - Develop and improve the operational excellence, data quality, monitoring and data governance. BASIC QUALIFICATIONS - Bachelor …
journey of our data platform (AWS). Cloud Proficiency: Hands-on experience with at least one major cloud platform (AWS, Azure, or GCP) and its core data services (e.g., S3, Redshift, Lambda/Functions, Glue). Data Modelling: Deep understanding of ELT/ETL patterns and data modelling techniques. CRM/Customer Data Focus: Experience working directly with data from …
Business Intelligence Engineer - Locations considered: London, Paris, Madrid, Milan, Munich, Berlin. EU Heavy and Bulky Services. Job ID: Amazon EU SARL (UK Branch). Are you interested in building data warehouse and data lake solutions to shape the analytical backbone of the EU Heavy Bulky & Services team? We are hiring a Business … data engineering mentality. We seek individuals who enjoy collaborating across stakeholders and who bring excellent statistical and analytical abilities to the team. Heavy Bulky & Services is a growing Amazon business, designed to enable customers to enjoy the website browsing, product shopping and delivery experience for our specific product portfolio. We are looking for a Business Intelligence Engineer to support … tradeoffs. - Propose and prioritize changes to reporting, create additional metrics, own the data presented. - Build an expert understanding of technologies and techniques required to build analytical solutions in the Amazon data ecosystem. - Learn new systems, tools, and industry best practices to help design new studies and build new tools. A day in the life We dedicate 75% of …
Amazon Retail Financial Intelligence Systems is seeking a seasoned and talented Senior Data Engineer to join the Fortune Platform team. Fortune is a fast-growing team with a mandate to build tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of Business Intelligence solutions using big data technologies such … as Apache Spark, Hive/Hadoop, and distributed query engines. As a Data Engineer at Amazon, you will be working in a large, extremely complex and dynamic data environment. You should be passionate about working with big data and able to learn new technologies rapidly and evaluate them critically. You should have excellent communication skills and be able … to deliver high-quality products. About the team Profit intelligence systems measure and predict true profit (or loss) for each item as a result of a specific shipment to an Amazon customer. Profit Intelligence is all about providing intelligent ways for Amazon to understand profitability across the retail business. What are the hidden factors driving the growth or profitability across …
and transformation workflows. Model and maintain curated data layers to support reporting, analytics, and decision-making. Ensure high availability, scalability, and performance of data warehouse systems (cloud-based, e.g., Redshift). Develop & Manage Data Products: Collaborate with business and domain experts to define and deliver high-value, reusable data products. Implement best practices around versioning, SLAs, data contracts, and quality … For 3+ years of experience as a Data Engineer, with a strong focus on data warehousing and data modeling. Hands-on experience with cloud-native data tech (preferably AWS: Redshift, Glue, S3, Lambda, IAM, Terraform, GitHub, CI/CD). Proficiency in SQL and Python for data processing and automation. Experience working with data modeling tools and practices (e.g., dimensional …
Strong background in software engineering, with expertise in cloud computing and DevOps practices. Hands-on experience building, deploying, and maintaining services in AWS (e.g., EC2, Lambda, S3, RDS, Redshift, and other AWS services). Proficiency in programming languages such as Python (preferred), Java, or Go. Experience with infrastructure-as-code tools (e.g., Terraform, CloudFormation). Experience building scalable …
Snowflake (data warehousing and performance tuning); Informatica (ETL/ELT development and orchestration) - nice to have; Python (data processing and scripting) - required; AWS (data services such as S3, Glue, Redshift, Lambda) - required; cloud data practices and platforms - AWS required. Basic knowledge of related disciplines such as data science, software engineering, and business analytics. Proven ability to independently resolve complex …
Plumstead, Greater London, UK Hybrid / WFH Options
FalconSmartIT
best practices and aligned with the client's enterprise strategy. Proven experience as a Data Solution Architect, with expertise in various AWS services such as EC2, S3, Aurora, DynamoDB, Redshift, Lambda, Glue, Athena, QuickSight, and SageMaker, among others. Proficiency in Snowflake's architecture and familiarity with its features, including Snowpipe, Streams, and Tasks for real-time data processing. Additionally …
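For readers unfamiliar with the Streams and Tasks pattern mentioned above, the following is a hedged sketch of near-real-time processing issued through the Snowflake Python connector; the account, warehouse, schema and table names are assumptions for illustration only.

```python
# Hedged sketch: Snowflake Stream + Task for near-real-time processing,
# created via the Python connector. All identifiers are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()

# Capture change data arriving in the raw table
cur.execute("CREATE STREAM IF NOT EXISTS RAW_ORDERS_STREAM ON TABLE RAW_ORDERS")

# A task that wakes every minute and loads new rows only when the stream has data
cur.execute("""
    CREATE TASK IF NOT EXISTS MERGE_ORDERS
      WAREHOUSE = TRANSFORM_WH
      SCHEDULE = '1 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
    AS
      INSERT INTO CURATED.ORDERS
      SELECT order_id, customer_id, amount FROM RAW_ORDERS_STREAM
""")

# Tasks are created suspended; resume to start the schedule
cur.execute("ALTER TASK MERGE_ORDERS RESUME")
```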
Engineers and Associates, and lead technical discussions and design sessions. Key requirements: Must-Have: Strong experience with AWS services: Glue, Lambda, S3, Athena, Step Functions, EventBridge, EMR, EKS, RDS, Redshift, DynamoDB. Strong Python development skills. Proficient with Docker, containerization, and virtualization. Hands-on experience with CI/CD, especially GitLab CI. Solid experience with Infrastructure as Code (Terraform, CloudFormation …
NumPy, Pandas, SQLAlchemy) and expert-level SQL across multiple database platforms. Hands-on experience with modern data stack tools including dbt, Airflow, and cloud data warehouses (Snowflake, BigQuery, Redshift). Strong understanding of data modelling, schema design, and building maintainable ELT/ETL pipelines. Experience with cloud platforms (AWS, Azure, GCP) and infrastructure-as-code practices. Familiarity with data …
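To picture the pandas/SQLAlchemy and warehouse-loading skills listed above, here is a minimal ELT sketch; the connection strings, table names and schema are invented for the example and are not taken from the posting.

```python
# Minimal ELT step: extract from an operational database with SQLAlchemy,
# lightly transform in pandas, load into a warehouse staging table.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("postgresql+psycopg2://user:pass@source-db:5432/app")
warehouse = create_engine("postgresql+psycopg2://user:pass@warehouse:5439/analytics")

# Extract: pull the last day's orders from the source system
orders = pd.read_sql(
    "SELECT order_id, customer_id, amount, created_at FROM orders "
    "WHERE created_at >= CURRENT_DATE - INTERVAL '1 day'",
    source,
)

# Light transform: normalise types before loading
orders["created_at"] = pd.to_datetime(orders["created_at"])

# Load into staging; downstream modelling (e.g. dbt) builds on this table
orders.to_sql("stg_orders", warehouse, schema="staging", if_exists="append", index=False)
```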
GCP/AWS/Azure platforms) and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF etc. Excellent consulting experience and ability to design and build solutions … experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, big data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Desirable Skills: Designing Databricks-based solutions for Azure/AWS, Jenkins, Terraform …
8+1 hours). Responsibilities: Design, develop, and maintain data pipelines using Apache Airflow. Create and support data storage systems (Data Lakes/Data Warehouses) based on AWS (S3, Redshift, Glue, Athena, etc.). Integrate data from various sources, including mobile apps, third-party APIs, and internal services. Ensure data quality, consistency, and availability. Support analysts: build … including 1+ year at a Senior level. Deep knowledge of Airflow: DAGs, custom operators, and monitoring. Strong command of PostgreSQL databases; familiarity with the AWS stack (S3, Glue or Redshift, Lambda, CloudWatch) is a significant plus. Excellent SQL skills and confident Python programming. Knowledge of Kotlin and Golang, and the ability to work with unfamiliar codebases. Experience building robust …
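The Airflow work described above can be pictured with a minimal DAG sketch; the DAG id, schedule and task bodies are placeholders rather than anything from the posting.

```python
# Minimal Airflow 2.x DAG sketch: extract raw events, then load them to the warehouse.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_events(**context):
    # e.g. pull raw events from a third-party API and stage them in S3
    ...

def load_to_warehouse(**context):
    # e.g. COPY the staged files from S3 into Redshift
    ...

with DAG(
    dag_id="events_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load  # load runs only after extract succeeds
```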
requirements. Design and develop scalable AWS architectures for API-based and data-centric applications. Define data pipelines, ETL processes, and storage solutions using AWS services such as S3, OpenSearch, Redshift, Step Functions, Lambda, Glue, and Athena. Architect RESTful APIs, ensuring security, performance, and scalability. Optimise microservices architecture and API management strategies, leveraging tools such as KONG Gateway, Lambda, and … with a strong focus on AWS cloud services. Expertise in API design, microservices architecture, and cloud-native development. Hands-on experience with AWS services including EKS, Lambda, DynamoDB, S3, Redshift, RDS, Glue, Athena. Strong knowledge of serverless architectures, event-driven patterns, and containerization. Experience designing and implementing secure, scalable, and high-availability architectures. Solid understanding of networking, security, authentication …
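As a small illustration of the Athena piece of the architecture described above, here is a hedged boto3 sketch that runs a query against the Glue catalog and polls for completion; the database, output bucket and query are assumed values, not details from the listing.

```python
# Hedged sketch: run an Athena query over Glue-catalogued data and wait for it.
import time
import boto3

athena = boto3.client("athena")

def run_query(sql: str) -> str:
    resp = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "analytics"},           # hypothetical Glue database
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    query_id = resp["QueryExecutionId"]
    # Poll until Athena reports a terminal state
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(2)

print(run_query("SELECT event_type, COUNT(*) FROM events GROUP BY event_type"))
```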
ELT processes, automation. Technical Requirements: - Strong proficiency in SQL and Python programming - Extensive experience with data modeling and data warehouse concepts - Advanced knowledge of AWS data services, including: S3, Redshift, AWS Glue, AWS Lambda - Experience with Infrastructure as Code using AWS CDK - Proficiency in ETL/ELT processes and best practices - Experience with data visualization tools (QuickSight). Required Skills …
Employment Type: Contract
Rate: £350 - £400/day PTO, pension and national insurance
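The AWS CDK requirement in the listing above can be illustrated with a brief Python CDK sketch; the stack contents (an S3 landing bucket plus a small Lambda) and all resource names are hypothetical, not taken from the posting.

```python
# Hedged AWS CDK (v2, Python) sketch: S3 landing bucket + ingestion Lambda.
from aws_cdk import App, Stack, Duration, aws_s3 as s3, aws_lambda as _lambda
from constructs import Construct

class DataPlatformStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Raw landing bucket for incoming files
        raw_bucket = s3.Bucket(self, "RawDataBucket", versioned=True)

        # Lightweight ingestion function; a real stack would add triggers,
        # further IAM grants and Glue/Redshift resources
        ingest_fn = _lambda.Function(
            self, "IngestFunction",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="ingest.handler",
            code=_lambda.Code.from_asset("lambda"),
            timeout=Duration.minutes(1),
            environment={"RAW_BUCKET": raw_bucket.bucket_name},
        )
        raw_bucket.grant_read(ingest_fn)

app = App()
DataPlatformStack(app, "DataPlatformStack")
app.synth()
```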
or Kafka Streams)? Which statement best describes your hands-on responsibility for architecting and tuning cloud-native data lake/warehouse solutions (e.g., AWS S3 + Glue/Redshift, GCP BigQuery, Azure Synapse)? What best reflects your experience building ETL/ELT workflows with Apache Airflow (or similar) and integrating them into containerised CI/CD pipelines …