of domains — from predictive analytics to real-time personalization.
What You'll Do:
- Architect and build scalable, secure, and maintainable data pipelines on AWS (Glue, Lambda, Step Functions, S3, Redshift, etc.)
- Operationalize machine learning models in production environments using SageMaker or custom deployment frameworks
- Automate and optimize ETL/ELT processes for structured and unstructured data
- Integrate CI/… business needs
What We're Looking For:
- 5+ years of data engineering experience, with 2+ years in a senior role
- Deep expertise with AWS services (S3, Glue, Lambda, SageMaker, Redshift, etc.)
- Strong Python and SQL skills; experience with PySpark a bonus
- Familiarity with containerization (Docker), orchestration (Airflow, Step Functions), and infrastructure as code (Terraform/CDK)
- Solid understanding of …
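The ETL/ELT responsibilities listed above can be sketched as a minimal extract-transform-load flow. This is an illustrative sketch only: the record fields, required key, and the notion of "loading" as a row count are all invented for the example, and a production pipeline would run the equivalent logic inside AWS Glue or Lambda with S3/Redshift as the actual source and sink.

```python
import json
from typing import Iterable


def extract(raw_lines: Iterable[str]) -> list[dict]:
    """Parse JSON-lines input (in practice, objects read from S3)."""
    return [json.loads(line) for line in raw_lines]


def transform(records: list[dict]) -> list[dict]:
    """Normalise field types and drop malformed records."""
    cleaned = []
    for rec in records:
        if "user_id" not in rec:  # hypothetical required field
            continue
        cleaned.append({"user_id": rec["user_id"],
                        "amount": float(rec.get("amount", 0))})
    return cleaned


def load(records: list[dict]) -> int:
    """Stand-in for a bulk COPY into a warehouse; returns rows written."""
    return len(records)


raw = ['{"user_id": 1, "amount": "9.50"}', '{"amount": "3"}']
rows_written = load(transform(extract(raw)))  # second record lacks user_id
```

Keeping each stage a pure function, as here, is what makes a pipeline testable before it is wired into an orchestrator such as Step Functions or Airflow.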
in a Principal or Lead role.
- Proven experience designing and delivering enterprise data strategies.
- Exceptional communication and stakeholder management skills.
- Expertise in enterprise-grade data warehouses (Snowflake, BigQuery, Redshift).
- Hands-on experience with Apache Airflow (or similar orchestration tools).
- Strong proficiency in Python and SQL for pipeline development.
- Deep understanding of data architecture, dimensional modelling, and …
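The dimensional modelling requirement above can be illustrated with a toy star schema: a fact table of events joined to a dimension table of attributes. sqlite3 stands in here for a warehouse such as Snowflake, BigQuery, or Redshift, and the table and column names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: one row per product (hypothetical schema).
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT)")
# Fact table: one row per sale, keyed to the dimension.
cur.execute("CREATE TABLE fact_sales (product_id INTEGER, amount REAL)")

cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "books"), (2, "games")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                [(1, 10.0), (1, 5.0), (2, 20.0)])

# The typical star-schema query: aggregate facts grouped by a dimension attribute.
cur.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY d.category
""")
totals = cur.fetchall()
```

The same fact/dimension split scales to warehouse engines; only the DDL dialect and distribution/clustering keys change.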
City of London, London, United Kingdom Hybrid / WFH Options
83data
London, England, United Kingdom Hybrid / WFH Options
83zero Limited
London, England, United Kingdom Hybrid / WFH Options
83data
to minimise disruption.
- Collaborate with various teams to align migration processes with organisational goals and regulatory standards.
- Proficiency in AWS ETL technologies, including Glue, DataSync, DMS, Step Functions, Redshift, DynamoDB, Athena, Lambda, RDS, EC2, S3 data lake, and CloudWatch, for building and optimising ETL pipelines and data migration workflows.
- Working knowledge of Azure data engineering tools, including ADF (Azure …
interested in building data and science solutions to drive strategic direction? Based in Tokyo, the Science and Data Technologies team designs, builds, operates, and scales the data infrastructure powering Amazon's retail business in Japan. Working with a diverse, global team serving customers and partners worldwide, you can make a significant impact while continuously learning and experimenting with cutting … working with large-scale data, excels in highly complex technical environments, and above all, has a passion for data. You will lead the development of data solutions to optimize Amazon's retail operations in Japan, turning business needs into robust data pipelines and architecture. Leveraging your deep experience in data infrastructure and passion for enabling data-driven business impact … with scientists, software engineers and business teams to identify and implement strategic data opportunities.

Key job responsibilities:
- Create data solutions with AWS services such as Redshift, S3, EMR, Lambda, SageMaker, CloudWatch, etc.
- Implement robust data solutions and scalable data architectures.
- Develop and improve operational excellence, data quality, monitoring and data governance.

BASIC QUALIFICATIONS
- Bachelor …
London, England, United Kingdom Hybrid / WFH Options
Augusta Hitech
- Hands-on experience with AWS Lambda for serverless computing in data workflows.
- Knowledge of AWS Glue Crawler, Kinesis, and RDS for batch/real-time data streaming.
- Familiarity with AWS Redshift for large-scale data warehousing and analytics.
- Skillful in implementing data lakes using AWS Lake Formation for efficient storage and retrieval of diverse datasets.
- Experience with AWS Data Pipeline … solutions.
- Hands-on experience with AWS DMS (Database Migration Service) for seamless data migration between different databases.
- Knowledge of AWS Athena for interactive query processing on data stored in Amazon S3.
- Experience with AWS AppSync for building scalable and secure GraphQL APIs.

Qualifications:
- A minimum of 10 years of experience in data engineering or a related field.
- Strong background …
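The Lambda-in-data-workflows item above typically means a handler triggered by S3 object notifications. The sketch below only parses the event and collects bucket/key pairs; the event shape follows the documented S3 notification format, while the handler name, bucket, and key are hypothetical and the downstream processing is omitted.

```python
def handler(event, context=None):
    """Minimal Lambda-style handler: extract (bucket, key) pairs from an
    S3 put-notification event. Real processing of each object is omitted."""
    objects = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        objects.append((s3["bucket"]["name"], s3["object"]["key"]))
    return {"processed": objects}


# A trimmed-down S3 notification payload for local testing.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "raw-data"},
                "object": {"key": "2024/01/events.json"}}}
    ]
}
result = handler(sample_event)
```

Keeping the event parsing separate from the processing logic makes the handler easy to unit-test without deploying to AWS.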
and SQL
- Solid AWS experience (Glue, Lambda, SQS, S3, etc.)
- Experience designing/building/maintaining ETL pipelines
- Data modelling experience to forecast and analyse
- Data warehousing knowledge (PostgreSQL, Redshift, Snowflake, etc.)
- Self-starter mindset: you'll get stuck in and find answers independently to bring back to the team
- A drive to own projects and put …
journey of our data platform (AWS)
- Cloud Proficiency: Hands-on experience with at least one major cloud platform (AWS, Azure, or GCP) and its core data services (e.g., S3, Redshift, Lambda/Functions, Glue).
- Data Modelling: Deep understanding of ELT/ETL patterns and data modelling techniques.
- CRM/Customer Data Focus: Experience working directly with data from …
City of London, London, United Kingdom Hybrid / WFH Options
OTA Recruitment
data scientists, and business stakeholders.
- Familiarity with cloud-based data ecosystems such as AWS, Azure, or GCP, and with data warehouse/lakehouse technologies such as Snowflake, BigQuery, Redshift, or Athena/Glue.
Essential:
- Proficient in writing clean, efficient, and maintainable SQL and Python code, particularly for data transformation and analytics use cases.
- Strong understanding of data modeling …
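The "SQL for data transformation and analytics" requirement above commonly means window functions. A minimal running-total query is shown below; sqlite3 (which supports window functions in SQLite 3.25+) stands in for a warehouse engine, and the table, dates, and amounts are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (day TEXT, revenue REAL)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [("2024-01-01", 100.0), ("2024-01-02", 50.0), ("2024-01-03", 75.0)])

# Running revenue total over time -- a staple analytics transformation.
cur.execute("""
    SELECT day, SUM(revenue) OVER (ORDER BY day) AS running_total
    FROM orders ORDER BY day
""")
rows = cur.fetchall()
```

The same `SUM(...) OVER (ORDER BY ...)` syntax carries over to Snowflake, BigQuery, and Redshift with only minor dialect differences.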
Liverpool, England, United Kingdom Hybrid / WFH Options
Intuita - Vacancies
Nice to Haves":
• Certification in dbt or Google Cloud Platform or related technologies.
• Experience with other cloud platforms (e.g. AWS, Azure, Snowflake) and data warehouse/lakehouse technologies (e.g. Redshift, Databricks, Synapse)
• Knowledge of distributed big data technologies.
• Proficiency in Python.
• Familiarity with data governance and compliance frameworks.

Your characteristics as a Consultant will include:
• Driven by delivering quality …
Business Intelligence Engineer - Locations considered: London, Paris, Madrid, Milan, Munich, Berlin, EU Heavy and Bulky Services
Job ID: Amazon EU SARL (UK Branch)
Locations considered: London, Paris, Madrid, Milan, Munich, Berlin
Are you interested in building data warehouse and data lake solutions to shape the analytical backbone of the EU Heavy Bulky & Services team? We are hiring a Business … data engineering mentality. We seek individuals who enjoy collaborating across stakeholders and who bring excellent statistical and analytical abilities to the team. Heavy Bulky & Services is a growing Amazon business, designed to enable customers to enjoy the website browsing, product shopping, and delivery experience for our specific product portfolio. We are looking for a Business Intelligence Engineer to support … tradeoffs.
- Propose and prioritize changes to reporting, create additional metrics, own the data presented.
- Build an expert understanding of the technologies and techniques required to build analytical solutions in the Amazon data ecosystem.
- Learn new systems, tools, and industry best practices to help design new studies and build new tools.
A day in the life: We dedicate 75% of …
Amazon Retail Financial Intelligence Systems is seeking a seasoned and talented Senior Data Engineer to join the Fortune Platform team. Fortune is a fast-growing team with a mandate to build tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of Business Intelligence solutions using big data technologies such … as Apache Spark, Hive/Hadoop, and distributed query engines. As a Data Engineer at Amazon, you will be working in a large, extremely complex and dynamic data environment. You should be passionate about working with big data, and able to learn new technologies rapidly and evaluate them critically. You should have excellent communication skills and be able … to deliver high-quality products.

About the team: Profit intelligence systems measure and predict true profit (or loss) for each item as a result of a specific shipment to an Amazon customer. Profit Intelligence is all about providing intelligent ways for Amazon to understand profitability across the retail business. What are the hidden factors driving the growth or profitability across …
broad range of problems using your technical skills
- Demonstrable experience of utilising strong communication and stakeholder management skills when engaging with customers
- Significant experience with cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP)
- Strong proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or Oracle
- Experience with big data … technologies such as Hadoop, Spark, or Hive
- Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow
- Proficiency in Python and at least one other programming language such as Java or Scala
- Willingness to mentor more junior members of the team
- Strong analytical and problem-solving skills with the ability to work …
/hr W2
Responsibilities:
- Develop, optimize, and maintain data ingestion flows using Apache Kafka, Apache NiFi, and MySQL/PostgreSQL.
- Develop within AWS cloud services such as Redshift, SageMaker, API Gateway, QuickSight, and Athena.
- Coordinate with data owners to ensure proper configuration.
- Document SOPs related to streaming, batch configuration, or API management.
- Record details of data ingestion activities for … organizational levels.
- Analytical, organizational, and problem-solving skills.
- Experience with data observability tools like Grafana, Splunk, AWS CloudWatch, Kibana, etc.
- Knowledge of container technologies such as Docker, Kubernetes, and Amazon EKS.
Education Requirements:
- Bachelor's Degree in Computer Science, Engineering, or related field, or at least 8 years of equivalent work experience.
- 8+ years of IT data/system …
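Ingestion flows like the Kafka/NiFi work described above almost always include a micro-batching step: buffer streamed records, then flush fixed-size batches to a bulk sink. A real consumer loop needs a running broker, so the sketch below shows only the batching logic; the class name, batch size, and sink callable are all invented for the example.

```python
class MicroBatcher:
    """Buffer streamed records and flush them in fixed-size batches --
    the pattern a Kafka consumer loop would use before a bulk write
    to Redshift or S3. Batch size here is arbitrary."""

    def __init__(self, batch_size: int, sink):
        self.batch_size = batch_size
        self.sink = sink          # callable that persists one batch
        self._buffer = []

    def add(self, record) -> None:
        self._buffer.append(record)
        if len(self._buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if self._buffer:
            self.sink(list(self._buffer))
            self._buffer.clear()


batches = []
b = MicroBatcher(batch_size=2, sink=batches.append)
for msg in ["a", "b", "c"]:
    b.add(msg)
b.flush()  # flush the final partial batch
```

In production the final `flush()` would run on consumer shutdown or on a timer, so late records are not stranded in the buffer.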
London, South East, England, United Kingdom Hybrid / WFH Options
INTEC SELECT LIMITED
data from diverse sources, including APIs like Facebook, Google Analytics, and payment providers.
- Develop and optimize data models for batch processing and real-time streaming using tools like AWS Redshift, S3, and Kafka.
- Lead efforts in acquiring, storing, processing, and provisioning data to meet evolving business requirements.
- Perform customer behavior analysis, gaming analytics, and create actionable insights to enhance … of experience in data engineering roles, with a proven ability to lead and mentor a team.
- Expertise in SQL, Python, and R.
- Strong proficiency in AWS technologies such as Redshift, S3, EC2, and Lambda.
- Experience with Kafka and real-time data streaming technologies.
- Advanced skills in building ETL pipelines and integrating data from APIs.
- Familiarity with data visualization and …
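Integrating data from third-party APIs such as those mentioned above usually requires retry handling, since rate limits and transient failures are routine. The helper below computes exponential-backoff delays with jitter; the default base and cap values are illustrative choices, not constants from any particular SDK.

```python
import random


def backoff_delays(max_retries: int, base: float = 0.5, cap: float = 30.0):
    """Yield one wait time per retry attempt: exponential growth capped
    at `cap` seconds, with random jitter so many workers hitting the
    same API do not retry in lockstep. Defaults are illustrative."""
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))
        yield delay * random.uniform(0.5, 1.0)  # jitter avoids thundering herds


delays = list(backoff_delays(5))  # e.g. sleep(d) between failed API calls
```

A caller would `time.sleep(d)` between failed requests and give up after the generator is exhausted.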
Business Intelligence Engineer
Hybrid – London
Up to £450 a day, Inside IR35, 6 Months

Key Skills:
- Business Intelligence (data modelling, data warehousing, dashboarding)
- SQL & Python
- AWS (S3, Lambda, Glue, Redshift)

The Senior Business Intelligence Engineer occupies a unique role at the intersection of technology, marketing, finance, statistics, data mining, and social science. We provide the key insight into customer … ELT processes, automation

Technical Requirements:
- Strong proficiency in SQL and Python programming
- Extensive experience with data modeling and data warehouse concepts
- Advanced knowledge of AWS data services, including S3, Redshift, AWS Glue, AWS Lambda
- Experience with Infrastructure as Code using AWS CDK
- Proficiency in ETL/ELT processes and best practices
- Experience with data visualization tools (QuickSight)

Required Skills …
ensuring data quality and integrity.
- Stay up to date with the latest trends and best practices in data engineering and cloud technologies, including AWS services such as S3, Lambda, Redshift, and Glue.

Qualifications:
- Bachelor's degree in computer science, engineering, or a related field.
- 5+ years of experience in data engineering, with a strong focus on Snowflake and AWS.
- … in SQL, Python, and ETL tools (StreamSets, dbt, etc.).
- Hands-on experience with Oracle RDBMS.
- Data migration experience to Snowflake.
- Experience with AWS services such as S3, Lambda, Redshift, and Glue.
- Strong understanding of data warehousing concepts and data modeling.
- Excellent problem-solving and communication skills, with a focus on delivering high-quality solutions.
- Understanding/hands-on …
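"Ensuring data quality and integrity," as required above, usually starts with a simple gate that counts rows failing basic checks before data is loaded downstream. The sketch below is a minimal version; the field names are illustrative, and a production pipeline might instead use a dedicated framework such as Great Expectations or dbt tests.

```python
def run_quality_checks(rows: list[dict], required: set[str]) -> dict:
    """Count rows that are missing required fields or hold null values.
    A real pipeline would fail the load or quarantine rows based on
    thresholds; here we only produce the summary report."""
    missing, nulls = 0, 0
    for row in rows:
        if not required.issubset(row):
            missing += 1          # a required column is absent entirely
        elif any(row[f] is None for f in required):
            nulls += 1            # column present but value is null
    return {"total": len(rows), "missing_fields": missing, "null_values": nulls}


report = run_quality_checks(
    [{"id": 1, "ts": "2024-01-01"}, {"id": 2}, {"id": 3, "ts": None}],
    required={"id", "ts"},
)
```

Emitting the report as a dict makes it easy to push the counts to CloudWatch metrics and alert when failure rates spike.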