London, England, United Kingdom Hybrid / WFH Options
DataAnalystJobs.io
Warehousing (highly desirable): Experience working on a data warehouse solution, irrespective of the underlying technology. Experience using cloud data warehouse technology would also be beneficial: Snowflake (preferred), Google BigQuery, AWS Redshift, or Azure Synapse. Data Pipeline (highly desirable): Demonstrable experience working with data from a wide variety of sources, including different database platforms, flat files, APIs, and event …
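By way of illustration for the pipeline skills above, a minimal sketch of one common pattern (API extract, flat-file staging, warehouse load) might look like the following. It assumes the snowflake-connector-python client; the endpoint, credentials, and the RAW_EVENTS table are hypothetical placeholders.

```python
# Minimal sketch: pull JSON from an API, stage it as a flat file, and load
# it into Snowflake via the table stage. All names are placeholders.
import csv
import requests
import snowflake.connector  # pip install snowflake-connector-python

# Extract: fetch records from a hypothetical JSON API endpoint.
records = requests.get("https://api.example.com/events", timeout=30).json()

# Stage: write the records to a local flat file.
with open("events.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "event_type", "occurred_at"])
    writer.writeheader()
    writer.writerows(records)

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",
    warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()
# PUT uploads the local file to the table's internal stage; COPY INTO loads it.
cur.execute("PUT file://events.csv @%RAW_EVENTS")
cur.execute("COPY INTO RAW_EVENTS FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
conn.close()
```

Staging through a flat file keeps the extract replayable; the same COPY pattern carries over to BigQuery or Redshift with their respective load commands.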
building and maintaining data pipelines using tools like Airflow, dbt, or similar
- Proven success in managing structured and unstructured data at scale
- Familiarity with cloud data platforms (e.g. AWS Redshift, Google BigQuery, Snowflake)
- Understanding of data modeling and schema design
- Experience integrating APIs and managing streaming/batch data flows
- Demonstrated ability to support ML/AI workflows with …
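As a concrete illustration of the pipeline-building skill above, a minimal Airflow 2.x DAG sketch follows; the dag_id and task bodies are placeholders (a dbt run would typically be invoked from a task rather than inlined).

```python
# A minimal Airflow DAG sketch for an extract -> transform -> load pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source systems")

def transform():
    print("clean and model the extracted data")

def load():
    print("write modelled data to the warehouse")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency chain
```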
NumPy, Pandas, SQLAlchemy) and expert-level SQL across multiple database platforms
- Hands-on experience with modern data stack tools including dbt, Airflow, and cloud data warehouses (Snowflake, BigQuery, Redshift)
- Strong understanding of data modelling, schema design, and building maintainable ELT/ETL pipelines
- Experience with cloud platforms (AWS, Azure, GCP) and infrastructure-as-code practices
- Familiarity with data …
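For the pandas/SQLAlchemy side of this stack, a hedged sketch of a small ELT step is shown below; the DSN, schemas, and table names are placeholders, and a real pipeline would usually push the transform down into the warehouse.

```python
# Sketch: extract raw rows, derive a daily aggregate with pandas, load it back.
import pandas as pd
from sqlalchemy import create_engine

# Placeholder DSN for a Postgres-compatible warehouse (Redshift uses port 5439).
engine = create_engine("postgresql+psycopg2://user:pass@warehouse:5439/analytics")

raw = pd.read_sql("SELECT order_id, amount, created_at FROM raw.orders", engine)
daily = (
    raw.assign(order_date=pd.to_datetime(raw["created_at"]).dt.date)
       .groupby("order_date", as_index=False)["amount"].sum()
)
daily.to_sql("daily_revenue", engine, schema="marts", if_exists="replace", index=False)
```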
Business Intelligence Engineer, Japan Operations
The Amazon ARTS (APEX-RoW Technology Solutions) team is looking for a Business Intelligence Engineer to optimize one of the world's largest and most complex data warehouse environments. You are expected to be passionate about working with huge databases and to be someone who loves to bring datasets together to create dashboards and business reports. … programs with limited guidance from a manager: 1) optimize users' data pipelines, SQL, and database infrastructure across RoW; 2) help business stakeholders by providing data pipeline solutions with Amazon/ARTS internal tech products and AWS services (e.g. Redshift, SageMaker, EMR, ETL tools, data lake); 3) develop scripts for maintenance automation; 4) execute the tech implementation with … clear milestone planning and lead it to a smooth launch; 5) help business stakeholders ramp up with team-internal and Amazon tech products (e.g., Redshift Cluster, ETL tools, query performance)
A day in the life
An average day may look like:
- Attend daily standup to give status updates
- Monitor, maintain and optimize the team's data infrastructure health and …
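As an illustration of the query-performance monitoring this role describes, a small sketch against Redshift's SVV_TABLE_INFO system view follows; connection details are placeholders, and psycopg2 works here because Redshift speaks the Postgres wire protocol.

```python
# Sketch: flag Redshift tables that are heavily unsorted or row-skewed,
# candidates for VACUUM or a distribution-key review.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.example.redshift.amazonaws.com",
    port=5439, dbname="dev", user="admin", password="...",
)
with conn.cursor() as cur:
    cur.execute("""
        SELECT "table", unsorted, skew_rows
        FROM svv_table_info
        WHERE unsorted > 20 OR skew_rows > 4
        ORDER BY unsorted DESC;
    """)
    for table, unsorted, skew in cur.fetchall():
        print(f"{table}: {unsorted}% unsorted, row skew {skew}")
conn.close()
```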
ensure the privacy of our customers. Our mission is to provide a commerce foundation that accelerates business innovation and delivers a secure, available, performant, and reliable shopping experience to Amazon’s customers. The goal of the eCS Data Engineering and Analytics team is to provide high-quality, on-time reports to Amazon business teams, enabling them to expand … globally at scale. Our team has a direct impact on retail CX, a key component that runs our Amazon flywheel. As a Data Engineer, you will own the architecture of DW solutions for the Enterprise using multiple platforms. You will have the opportunity to lead the design, creation and management of extremely large datasets, working backwards from business … as a service, which will have an immediate influence on day-to-day decision making.
Key job responsibilities
- Develop data products, infrastructure and data pipelines leveraging AWS services (such as Redshift, Kinesis, EMR, Lambda, etc.) and internal BDT tools (DataNet, Cradle, QuickSight, etc.)
- Improve existing solutions and come up with a next-generation Data Architecture to improve scale, quality, timeliness …
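For the Kinesis-plus-Lambda combination named above, a minimal handler sketch is shown below. The event shape follows the standard Kinesis-to-Lambda integration; the downstream write is a placeholder.

```python
# Sketch: AWS Lambda handler consuming records from a Kinesis stream.
import base64
import json

def handler(event, context):
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded inside the record envelope.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Placeholder: route the decoded event to a warehouse staging area.
        print(f"partition={record['kinesis']['partitionKey']} payload={payload}")
    # Only meaningful if ReportBatchItemFailures is enabled on the event source.
    return {"batchItemFailures": []}
```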
DESCRIPTION Disrupting the way Amazon fulfills our customers' orders. Amazon Operations is changing the way we improve Customer Experience through flawless fulfillment, focused on 1) successful on-time delivery, 2) at speed and 3) at the lowest possible cost. Being the engine of Amazon operational excellence, driving zero defects through ideal operation, being the heart of the … to build scalable data models, and dive deep into our data with a strong bias for action to generate insights that drive business improvements. Your work will directly impact Amazon's operational efficiency and customer experience worldwide.
Key job responsibilities
- Translating business requirements into modular and generic data infrastructure, implementing and managing scalable data platforms that facilitate self-service … insights generation and scientific model building, and handling large-scale datasets while creating maintainable, efficient data components.
- Design and implement automation to achieve Best at Amazon standards for system efficiency, IMR efficiency, data availability, consistency, and compliance.
- Working within a sophisticated technical environment, you'll interface with various technology teams to extract, transform, and load data from diverse sources …
Job ID: AWS ProServe IN - Maharashtra The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a … candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let that stop you from applying. Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most … the ability to explain technical concepts to both technical and non-technical audiences
- Experience with non-relational databases - DynamoDB, Mongo, etc.
- Experience with MPP data warehouse solutions such as Redshift, Greenplum, Snowflake, etc.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during …
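For the DynamoDB experience listed above, a small boto3 sketch follows; the table, key schema, and region are hypothetical (it assumes a table with customer_id as partition key).

```python
# Sketch: write and query items in a hypothetical DynamoDB table.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb", region_name="ap-south-1")
table = dynamodb.Table("CustomerOrders")

table.put_item(Item={"customer_id": "C123", "order_id": "O456", "total": 99})
resp = table.query(KeyConditionExpression=Key("customer_id").eq("C123"))
print(resp["Items"])
```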
London, England, United Kingdom Hybrid / WFH Options
RED Global
We are looking for skills in the following:
- Proven experience as an AWS Data SME or AWS Data Engineer, working extensively with AWS cloud services.
- Expertise in AWS Redshift, Glue, Lambda, Terraform, Kinesis, Athena, and EMR.
- Strong ETL/ELT development and data warehousing experience.
- Proficiency in Python, Java, or Scala for data processing and automation.
- In-depth knowledge of SQL, Apache Kafka, and Amazon RDS.
- Experience in data security, governance, and compliance best practices.
- Familiarity with CI/CD pipelines, DevOps methodologies, and monitoring/logging best practices.
- Strong problem-solving skills, with the ability to work in a collaborative and fast-paced environment.
Preferred Qualifications: AWS Certified Data Analytics - Specialty …
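As a sketch of the Athena side of this stack, running a query from Python with boto3 might look like the following; the database, table, and results bucket are placeholders, and in practice an Airflow sensor or Step Function would usually own the polling loop.

```python
# Sketch: submit an Athena query, poll for completion, fetch results.
import time
import boto3

athena = boto3.client("athena", region_name="eu-west-2")

qid = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(rows[:5])
```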
Strong knowledge of relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra) for efficient data storage and retrieval. Data Warehousing: Experience with data warehousing solutions, such as Amazon Redshift, Google BigQuery, Snowflake, or Azure Synapse Analytics, including data modelling and ETL processes. ETL Processes: Proficient in designing and implementing ETL (Extract, Transform, Load) processes using tools …
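For the NoSQL side of this storage stack, a minimal pymongo sketch is shown below; the connection string, database, and collection names are placeholders.

```python
# Sketch: insert and query documents in a hypothetical MongoDB collection.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["analytics"]["events"]

events.insert_one({"user_id": 42, "action": "page_view", "path": "/pricing"})
for doc in events.find({"user_id": 42}).limit(5):
    print(doc)
```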
data from diverse sources, including APIs like Facebook, Google Analytics, and payment providers. Develop and optimize data models for batch processing and real-time streaming using tools like AWS Redshift, S3, and Kafka. Lead efforts in acquiring, storing, processing, and provisioning data to meet evolving business requirements. Perform customer behavior analysis and gaming analytics, and create actionable insights to enhance … of experience in data engineering roles, with a proven ability to lead and mentor a team. Expertise in SQL, Python, and R. Strong proficiency in AWS technologies such as Redshift, S3, EC2, and Lambda. Experience with Kafka and real-time data streaming technologies. Advanced skills in building ETL pipelines and integrating data from APIs. Familiarity with data visualization and …
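For the Kafka-to-S3 hop this listing describes, a hedged sketch using kafka-python and boto3 follows; the topic, bucket, broker, and group id are placeholders.

```python
# Sketch: consume a Kafka topic and flush fixed-size batches to S3 as JSON.
import json
import boto3
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "player-events",
    bootstrap_servers="broker:9092",
    group_id="s3-sink",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
s3 = boto3.client("s3")

batch = []
for msg in consumer:
    batch.append(msg.value)
    if len(batch) >= 500:  # flush in fixed-size batches
        s3.put_object(
            Bucket="gaming-raw-events",
            Key=f"events/offset={msg.offset}.json",
            Body=json.dumps(batch).encode("utf-8"),
        )
        batch = []
```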
London, England, United Kingdom Hybrid / WFH Options
KennedyPearce Consulting
provisioning and management of AWS resources. Ensure that infrastructure is designed for scalability, security, and cost-effectiveness. Design and optimise data storage solutions using AWS services such as S3, Redshift, RDS, and DynamoDB. Ensure data is stored efficiently and securely, with appropriate backup and disaster recovery strategies. Develop Python scripts to automate data processing tasks and improve workflow efficiency. … Experience: 3+ years of experience as a Data Engineer or in a similar role. Proven experience with AWS services (e.g., S3, Redshift, RDS, Glue, Lambda, IAM). Strong expertise in Terraform. Proficient in SQL for querying relational databases and handling large datasets. Experience with data pipeline orchestration tools (e.g., Apache Airflow, AWS Step Functions). Familiarity with CI/CD …
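An illustrative Python automation of the kind this listing describes: sweep a raw S3 prefix, process each object, and move it to a processed prefix. The bucket and prefixes are placeholders, and the transform step is deliberately left as a stub.

```python
# Sketch: process-and-move automation over an S3 data-lake prefix.
import boto3

s3 = boto3.client("s3")
BUCKET = "company-data-lake"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix="raw/"):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        # ... transform `body` here ...
        s3.copy_object(
            Bucket=BUCKET,
            Key=key.replace("raw/", "processed/", 1),
            CopySource={"Bucket": BUCKET, "Key": key},
        )
        s3.delete_object(Bucket=BUCKET, Key=key)
```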
City of London, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
it, and transforming complex datasets
- AWS skill set
- Delivery experience
- Building solutions in Snowflake
- Insurance experience – advantageous but not necessary
Key Responsibilities:
- Lead the design and implementation of Snowflake [and Redshift] based data warehousing solutions within an AWS environment
- Mentor team members through code reviews and pair programming
- Build and support new AWS-native cloud data warehouse solutions
- Develop …
experience as a data engineer with a strong focus on Snowflake and AWS services in large-scale enterprise environments. Extensive experience with AWS services, e.g. EC2, S3, RDS, DynamoDB, Redshift, Lambda, API Gateway. Strong SQL skills for complex data queries and transformations. Python programming for data processing and analysis is a plus. Strong acumen for application health through performance …
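For the Snowflake-on-AWS pattern above, a hedged sketch of a bulk load from an external S3 stage follows; the account, stage, and table names are placeholders, and the stage is assumed to already exist with the appropriate IAM integration.

```python
# Sketch: COPY INTO a Snowflake staging table from an external S3 stage.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",
    warehouse="LOAD_WH", database="INSURANCE_DW", schema="STAGING",
)
cur = conn.cursor()
cur.execute("""
    COPY INTO staging.policies
    FROM @s3_landing_stage/policies/
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
""")
print(cur.fetchall())  # per-file load results returned by COPY
conn.close()
```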
GCP/AWS/Azure platforms) and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF etc. Excellent consulting experience and ability to design and build solutions … a similar role. Ability to lead and mentor the architects. Required Skills: Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Preferred Skills: Designing Databricks-based solutions for Azure/AWS, Jenkins, Terraform …
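As a minimal sketch of the Spark batch processing this role centres on, a small PySpark aggregation job is shown below; the S3 paths and column names are placeholders, and a configured Spark environment is assumed.

```python
# Sketch: PySpark batch job aggregating raw events into daily counts.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-aggregation").getOrCreate()

events = spark.read.parquet("s3://data-lake/raw/events/")
daily = (
    events.withColumn("event_date", F.to_date("event_ts"))
          .groupBy("event_date", "event_type")
          .count()
)
daily.write.mode("overwrite").parquet("s3://data-lake/curated/daily_event_counts/")
```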
data enrichment and correlation across primary, secondary, and tertiary sources.
Cloud, Infrastructure, and Platform Engineering: Develop and deploy data workflows on AWS or GCP, using services such as S3, Redshift, Pub/Sub, or BigQuery. Containerize data processing tasks using Docker, orchestrate with Kubernetes, and ensure production-grade deployment. Collaborate with platform teams to ensure scalability, resilience, and observability … of data pipelines.
Database Engineering: Write and optimize complex SQL queries on relational (Redshift, PostgreSQL) and NoSQL (MongoDB) databases. Work with the ELK stack (Elasticsearch, Logstash, Kibana) for search, logging, and real-time analytics. Support Lakehouse architectures and hybrid data storage models for unified access and processing.
Data Governance & Stewardship: Implement robust data governance, access control, and stewardship policies aligned …
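For the ELK work named above, a hedged sketch of a log search with the official Elasticsearch Python client (8.x keyword-argument style) follows; the index name and query are placeholders.

```python
# Sketch: query recent ERROR-level pipeline logs from Elasticsearch.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="pipeline-logs",
    query={"bool": {
        "must": [{"match": {"level": "ERROR"}}],
        "filter": [{"range": {"@timestamp": {"gte": "now-1h"}}}],
    }},
    size=20,
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"].get("message"))
```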
requirements. Design and develop scalable AWS architectures for API-based and data-centric applications. Define data pipelines, ETL processes, and storage solutions using AWS services such as S3, OpenSearch, Redshift, Step Functions, Lambda, Glue, and Athena. Architect RESTful APIs, ensuring security, performance, and scalability. Optimise microservices architecture and API management strategies, leveraging tools such as Kong Gateway, Lambda, and … with a strong focus on AWS cloud services. Expertise in API design, microservices architecture, and cloud-native development. Hands-on experience with AWS services including EKS, Lambda, DynamoDB, S3, Redshift, RDS, Glue, and Athena. Strong knowledge of serverless architectures, event-driven patterns, and containerization. Experience designing and implementing secure, scalable, and high-availability architectures. Solid understanding of networking, security, authentication …
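A hedged sketch of the serverless API pattern this architecture describes: a Lambda handler behind API Gateway, using the standard proxy event shape. The routing and payload are illustrative, and the downstream hand-off is a stub.

```python
# Sketch: Lambda handler for an API Gateway proxy integration.
import json

def handler(event, context):
    # HTTP APIs (v2) use rawPath; REST APIs (v1) use path.
    path = event.get("rawPath") or event.get("path", "/")
    if path == "/health":
        return {"statusCode": 200, "body": json.dumps({"status": "ok"})}
    body = json.loads(event.get("body") or "{}")
    # Placeholder: validate, then hand off to downstream services
    # (Glue, Step Functions, etc.).
    return {
        "statusCode": 202,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"accepted": True, "echo": body}),
    }
```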