London, England, United Kingdom Hybrid / WFH Options
Doit Intl
…/or AWS Machine Learning Specialty. Proven track record of architecting, deploying, and optimizing complex cloud solutions for AI-driven workloads.
Generative AI and Cutting-Edge Technologies: Expertise in Amazon Bedrock for deploying foundation models and managing scalable GenAI workloads. Proficiency in fine-tuning and deploying Large Language Models (LLMs) and multimodal AI using Amazon SageMaker JumpStart and … Hugging Face on AWS. Experience leveraging Amazon Q Business and Amazon Q Developer (formerly AWS CodeWhisperer) for AI-powered coding productivity and automation.
Machine Learning Frameworks and Pipelines: In-depth knowledge of Amazon SageMaker, including Pipelines, Model Monitor, Data Wrangler, and SageMaker Clarify for bias detection and interpretability. Skilled in distributed model training on multi-GPU clusters and optimization for … high-performance inference.
Data Engineering and AI Workflow Optimization: Proficient in building data pipelines with Amazon S3, AWS Glue, Lake Formation, and Redshift for AI and ML workloads. Experienced in optimizing data preparation for large-scale AI model training and inference workflows.
AI Integration and Deployment: Expertise in building end-to-end AI pipelines using AWS Lambda, Step …
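The Bedrock work described above largely comes down to serialising a prompt into the JSON body that an `invoke_model` call expects. A minimal sketch, assuming the Anthropic Messages schema on Bedrock; the model ID and field values are illustrative, not taken from the listing:

```python
import json

# Assumed model identifier -- replace with whatever model the account has enabled.
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_messages_body(prompt: str, max_tokens: int = 512) -> str:
    """Serialise a single-turn request into the JSON body Bedrock's
    invoke_model call expects for Anthropic-family models (assumed schema)."""
    payload = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

body = build_messages_body("Summarise this architecture document.")

# The actual call would go through boto3's bedrock-runtime client, e.g.:
# response = boto3.client("bedrock-runtime").invoke_model(modelId=MODEL_ID, body=body)
```

Keeping the payload construction in a plain function like this makes the request shape unit-testable without touching AWS credentials.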
Atlanta, Georgia, United States Hybrid / WFH Options
Pyramid Consulting Inc
…and regulations. Optimize data storage, processing, and retrieval mechanisms for performance and cost-efficiency. Automate deployment, monitoring, and maintenance tasks using AWS services such as AWS Glue, AWS Lambda, Amazon EMR, etc. Conduct performance tuning and optimization of data processing workflows to ensure high availability and reliability. Stay up-to-date with the latest AWS technologies and trends in … field. Proven experience as a Data Engineer, Data Architect, or similar role with a focus on AWS technologies. In-depth knowledge of AWS services such as S3, Glue, EMR, Redshift, Athena, Kinesis, etc. Strong programming skills in languages such as Python, Scala, or Java. Hands-on experience with big data processing frameworks like Apache Spark, Apache Hadoop, etc. Experience …
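Automating a Glue job run from Lambda, as this listing describes, is mostly a matter of shaping the job arguments correctly before handing them to the Glue API. A hedged sketch; the job name, argument keys, and event shape are assumptions for illustration:

```python
def build_glue_args(run_date: str, output_path: str) -> dict:
    """Glue job arguments must be string-valued and are conventionally
    passed with a leading '--' prefix."""
    return {
        "--RUN_DATE": run_date,
        "--OUTPUT_PATH": output_path,
        "--job-bookmark-option": "job-bookmark-enable",
    }

def handler(event, context):
    """Lambda entry point: translate the trigger event into a Glue job run."""
    args = build_glue_args(event["run_date"], event["output_path"])
    # In a real deployment the handler would start the job via boto3:
    # boto3.client("glue").start_job_run(JobName="nightly-etl", Arguments=args)
    return {"job": "nightly-etl", "arguments": args}

result = handler({"run_date": "2024-01-01", "output_path": "s3://bucket/out/"}, None)
```

Separating argument construction from the API call keeps the Lambda testable locally, which matters for the reliability goals the listing mentions.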
Boku Inc. (BOKU.L) is the leading global provider of local mobile-first payments solutions. Global brands including Amazon, DAZN, Meta, Google, Microsoft, Netflix, Sony, Spotify, and Tencent rely on Boku to reach millions of new paying consumers who do not use credit cards with our purpose-built payment network of more than 300 local payment methods across 70+ countries. …
Technical Skills: Strong proficiency in SQL and Python for data transformation and scripting. Experience building data models with dbt or similar tools. Understanding of modern data warehousing platforms (e.g., Redshift). Familiarity with data modelling techniques and transformation best practices. Experience with API integration and working with external data sources. Exposure to BI tools (e.g., Tableau, Power BI, etc.) and …
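The SQL/Python transformation work described here typically means flattening raw API payloads into warehouse-friendly rows before a dbt model picks them up. A minimal sketch; the field names and payload shape are invented for the example:

```python
import json

def normalise_payment(record: dict) -> dict:
    """Flatten a nested API payload into a flat, typed row suitable for
    loading into a warehouse such as Redshift (illustrative schema)."""
    return {
        "payment_id": record["id"],
        # Store money as integer minor units to avoid float rounding issues.
        "amount_minor_units": int(round(float(record["amount"]) * 100)),
        "currency": record["currency"].upper(),
        "country": record.get("consumer", {}).get("country"),
    }

raw = json.loads(
    '{"id": "p-1", "amount": "9.99", "currency": "gbp", '
    '"consumer": {"country": "GB"}}'
)
row = normalise_payment(raw)
```

Doing this normalisation in code (or in a staging dbt model) keeps downstream models free of per-source quirks.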
…based skills and technical expertise to drive innovation and adoption of new technology.
Your role and responsibilities: We are excited to offer an Application Architect position specialising in the Amazon Web Services (AWS) cloud environment. This role is perfect for an experienced professional who wants to leverage their expertise in designing scalable, secure, and efficient cloud architectures. This role … tools like Kubernetes on AWS. Familiarity with Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation. Knowledge of data engineering and experience with AWS data services like Redshift, Glue, or Kinesis. Experience with CI/CD pipelines and automation on AWS, such as using AWS CodePipeline, CodeBuild, and CodeDeploy.
ABOUT BUSINESS UNIT: IBM Consulting is IBM's …
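Infrastructure as Code, as required above, ultimately produces declarative templates; generating them programmatically keeps them reviewable and testable. A minimal sketch that emits a CloudFormation-style S3 bucket template (the resource and bucket names are placeholders, not from the listing):

```python
import json

def s3_bucket_template(bucket_name: str) -> dict:
    """Build a minimal CloudFormation template defining one S3 bucket."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DataBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }

template = s3_bucket_template("example-data-lake-raw")
# The dict serialises directly to a deployable JSON template:
rendered = json.dumps(template, indent=2)
```

The same idea scales to Terraform via its JSON configuration syntax; the point is that templates built from code can be asserted on in CI before any deployment happens.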
Proficiency in SQL, Python, and ETL tools (StreamSets, dbt, etc.)
• Hands-on experience with Oracle RDBMS
• Data migration experience to Snowflake
• Experience with AWS services such as S3, Lambda, Redshift, and Glue
• Strong understanding of data warehousing concepts and data modeling
• Excellent problem-solving and communication skills, with a focus on delivering high-quality solutions
• Understanding/hands-on …
London (City of London), South East England, United Kingdom
Anson McCade
…excellence — this could be the one. What you’ll do as a Principal Data Engineer:
• Lead the architecture and build of modern data platforms using AWS-native tools (S3, Redshift, Lambda, Glue, DynamoDB, Matillion, etc.).
• Drive best practices in CI/CD, Infrastructure-as-Code, and data ops methodologies.
• Liaise directly with clients to define requirements, shape technical … production-ready.
• Be an advocate of clean code, DevOps, and hands-on engineering leadership.
Skills you’ll bring as a Principal Data Engineer:
• Extensive experience with AWS data services (Redshift, Glue, S3, etc.).
• Hands-on with ETL/ELT using Matillion or similar tools.
• Strong Python and SQL skills, with a good understanding of data formats.
• Demonstrated experience …
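A core step in the S3-to-Redshift platform work described above is a generated COPY statement. A hedged sketch; the table, S3 path, IAM role, and file format are placeholder assumptions:

```python
def build_copy_sql(table: str, s3_path: str, iam_role: str) -> str:
    """Render a Redshift COPY command that bulk-loads Parquet files
    from S3 into the given table (placeholder identifiers)."""
    return (
        f"COPY {table}\n"
        f"FROM '{s3_path}'\n"
        f"IAM_ROLE '{iam_role}'\n"
        "FORMAT AS PARQUET;"
    )

sql = build_copy_sql(
    "analytics.events",
    "s3://example-bucket/events/",
    "arn:aws:iam::123456789012:role/redshift-load",
)
```

Tools like Matillion or Glue generate equivalent statements under the hood; rendering the SQL from code makes the load step easy to log, review, and test.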
London (City of London), South East England, United Kingdom
Vallum Associates
…GCP/AWS/Azure platforms) and specifically in Big Data processing services (Apache Spark, Beam, or equivalent). In-depth knowledge of key technologies such as BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs/Kafka, and Dataflow/Airflow/ADF. Excellent consulting experience and ability to design and build solutions … experience in a similar role. Ability to lead and mentor the architects.
Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure; Big Data; Apache Spark/Beam on BigQuery/Redshift/Synapse; Pub/Sub/Kinesis/MQ/Event Hubs/Kafka; Dataflow/Airflow/ADF.
Desirable Skills: Designing Databricks-based solutions for Azure/AWS; Jenkins; Terraform …
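The Spark/Beam streaming work named here centres on windowed aggregation over event streams from Kafka or Kinesis. A pure-Python sketch of fixed-window counting, the simplest of the windowing strategies those engines provide; the event shape and window size are invented for illustration:

```python
from collections import defaultdict

def fixed_window_counts(events, window_seconds=60):
    """Assign (timestamp, key) events to fixed, non-overlapping windows
    and count occurrences per (window_start, key) pair."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

# Timestamps in seconds; two events fall in [0, 60), two in [60, 120).
events = [(5, "click"), (30, "click"), (65, "view"), (70, "click")]
result = fixed_window_counts(events)
```

Beam's `FixedWindows` and Spark Structured Streaming's `window()` implement the same assignment, plus the watermarking needed to handle late data, which this sketch deliberately omits.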
London, England, United Kingdom Hybrid / WFH Options
Ziff Davis
…candidate will have strong software development experience with Python and SQL, along with a keen interest in working with Docker, Kubernetes, Airflow, and AWS data technologies such as Athena, Redshift, and EMR. You will join a team of over 25 engineers across mobile, web, data, and platform domains, and should demonstrate excellent attention to detail and a commitment to … you:
• At least 3 years of relevant data engineering experience
• Strong Python and SQL skills
• Experience with dbt
• Experience with AWS
• Experience working with a columnar database such as Redshift
• Extensive experience with ETL/ELT and managing data pipelines
• Familiarity with Snowplow
• Experience integrating data from various sources
• Good cross-team communication skills
• Familiarity with CI/CD …
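The ETL/ELT pipeline work listed above usually relies on incremental loading: only rows past the previous run's high-water mark are processed, which is the pattern dbt's incremental models automate. A minimal sketch with assumed column names:

```python
def incremental_rows(rows, last_loaded_at):
    """Return only rows newer than the previous load's high-water mark.
    ISO-8601 date strings compare correctly as plain strings."""
    return [r for r in rows if r["updated_at"] > last_loaded_at]

rows = [
    {"id": 1, "updated_at": "2024-01-01"},
    {"id": 2, "updated_at": "2024-03-01"},
]
new_rows = incremental_rows(rows, "2024-02-01")
```

In a dbt incremental model the same filter appears as `where updated_at > (select max(updated_at) from {{ this }})`; the Python version here just makes the logic explicit and testable.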
…data workflows and orchestration using Managed Airflow (MWAA). Write and optimize SQL and Python/PySpark scripts for data transformation and processing. Work with data platforms such as Redshift and S3 for efficient data storage and retrieval. Implement data cataloging and governance using Glue and Lake Formation. Support reporting and analytics teams by ensuring data availability for QuickSight … in Computer Science, Data Engineering, or a related field. 7+ years of experience in data engineering, ETL development, or similar roles. Proficiency in AWS data technologies, including AWS Glue, Redshift, S3, and Lake Formation. Strong programming skills in SQL, Python, and PySpark. Experience with data pipeline orchestration using Managed Airflow (MWAA). Familiarity with data modeling, warehousing concepts, and …
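An MWAA DAG like the one implied above is, at its core, an explicit dependency graph over tasks; Python's standard library can demonstrate the ordering Airflow derives from it. The task names are illustrative, not from the listing:

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks that must finish before it starts,
# mirroring Airflow's `upstream >> downstream` dependency declarations.
dag = {
    "extract_s3": set(),
    "glue_transform": {"extract_s3"},
    "load_redshift": {"glue_transform"},
    "refresh_quicksight": {"load_redshift"},
}

order = list(TopologicalSorter(dag).static_order())
```

Airflow adds scheduling, retries, and parallel execution of independent branches on top, but the execution order it computes for a linear pipeline like this is exactly the topological order shown here.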