GCP/AWS/Azure platforms) and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies such as BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF, etc. Excellent consulting experience and ability to design and build solutions … experience in a similar role. Ability to lead and mentor architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark/Beam on BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF Desirable Skills: Designing Databricks-based solutions for Azure/AWS, Jenkins, Terraform More ❯
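By way of illustration, a minimal sketch of the kind of Apache Beam pipeline such a role involves. The bucket paths, field layout, and transform logic here are assumptions for illustration, not details taken from the posting:

```python
# Minimal Apache Beam pipeline: read CSV events, filter, count per key, write out.
# Paths and field positions are illustrative assumptions.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run():
    # DirectRunner locally; swap the runner option for DataflowRunner on GCP.
    options = PipelineOptions(runner="DirectRunner")
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://example-bucket/events/*.csv")
            | "Parse" >> beam.Map(lambda line: line.split(","))
            | "KeepValid" >> beam.Filter(lambda row: len(row) == 3)
            | "KeyByUser" >> beam.Map(lambda row: (row[0], 1))
            | "CountPerUser" >> beam.CombinePerKey(sum)
            | "Format" >> beam.MapTuple(lambda user, n: f"{user},{n}")
            | "Write" >> beam.io.WriteToText("gs://example-bucket/output/counts")
        )

if __name__ == "__main__":
    run()
```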
data enrichment and correlation across primary, secondary, and tertiary sources. Cloud, Infrastructure, and Platform Engineering: Develop and deploy data workflows on AWS or GCP, using services such as S3, Redshift, Pub/Sub, or BigQuery. Containerize data processing tasks using Docker, orchestrate with Kubernetes, and ensure production-grade deployment. Collaborate with platform teams to ensure scalability, resilience, and observability … of data pipelines. Database Engineering: Write and optimize complex SQL queries on relational (Redshift, PostgreSQL) and NoSQL (MongoDB) databases. Work with the ELK stack (Elasticsearch, Logstash, Kibana) for search, logging, and real-time analytics. Support Lakehouse architectures and hybrid data storage models for unified access and processing. Data Governance & Stewardship: Implement robust data governance, access control, and stewardship policies aligned More ❯
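As a flavour of the ELK-side work mentioned above, a short hedged Python sketch querying Elasticsearch for recent error-level logs. The host, index pattern, and field names are assumptions, not details from the posting:

```python
# Query Elasticsearch for recent error-level log events and print a count.
# Host, index pattern, and field names are illustrative assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="app-logs-*",
    query={
        "bool": {
            "must": [{"match": {"level": "ERROR"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
    size=10,
)

print(f"errors in last 15 min: {resp['hits']['total']['value']}")
for hit in resp["hits"]["hits"]:
    print(hit["_source"].get("message", "<no message>"))
```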
Proficiency in SQL, Python, and ETL tools (StreamSets, dbt, etc.) Hands-on experience with Oracle RDBMS Data migration experience to Snowflake Experience with AWS services such as S3, Lambda, Redshift, and Glue. Strong understanding of data warehousing concepts and data modeling. Excellent problem-solving and communication skills, with a focus on delivering high-quality solutions. Understanding/hands on More ❯
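For a sense of the Oracle-to-Snowflake migration work, a hedged Python sketch using python-oracledb and the Snowflake connector. Connection details and table names are placeholders, and a production migration would more likely stage files and use COPY INTO rather than row inserts:

```python
# Copy one table from Oracle to Snowflake in batches.
# Credentials, DSNs, and table/column names are placeholders.
import oracledb
import snowflake.connector

BATCH = 10_000

src = oracledb.connect(user="etl", password="***", dsn="orahost/ORCLPDB1")
dst = snowflake.connector.connect(
    user="etl", password="***", account="myorg-myaccount",
    warehouse="LOAD_WH", database="STAGING", schema="PUBLIC",
)

with src.cursor() as read_cur, dst.cursor() as write_cur:
    read_cur.execute("SELECT id, name, updated_at FROM customers")
    while True:
        rows = read_cur.fetchmany(BATCH)
        if not rows:
            break
        # %s placeholders match the Snowflake connector's default paramstyle.
        write_cur.executemany(
            "INSERT INTO customers (id, name, updated_at) VALUES (%s, %s, %s)",
            rows,
        )

dst.close()
src.close()
```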
Royal Leamington Spa, England, United Kingdom Hybrid / WFH Options
Verisk
requirements. Design and develop scalable AWS architectures for API-based and data-centric applications. Define data pipelines, ETL processes, and storage solutions using AWS services such as S3, OpenSearch, Redshift, Step Functions, Lambda, Glue, and Athena. Architect RESTful APIs, ensuring security, performance, and scalability. Optimise microservices architecture and API management strategies, leveraging tools such as KONG Gateway, Lambda, and … with a strong focus on AWS cloud services. Expertise in API design, microservices architecture, and cloud-native development. Hands-on experience with AWS services including EKS, Lambda, DynamoDB, S3, Redshift, RDS, Glue, Athena. Strong knowledge of serverless architectures, event-driven patterns, and containerization. Experience designing and implementing secure, scalable, and high-availability architectures. Solid understanding of networking, security, authentication More ❯
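A minimal sketch of a serverless REST endpoint of the sort this role describes. The table name, route parameter, and payload shape are assumptions for illustration, not details from the posting:

```python
# AWS Lambda handler behind API Gateway: return one record as JSON.
# Table name and event shape are illustrative assumptions.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table

def handler(event, context):
    order_id = (event.get("pathParameters") or {}).get("id")
    if not order_id:
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}

    resp = table.get_item(Key={"order_id": order_id})
    item = resp.get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item, default=str),
    }
```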
data models from the bronze layer upwards, promoting self-service analytics and data literacy. Technical Leadership & Excellence: Act as a subject matter expert in SQL (Postgres, Cloud SQL, BigQuery, Redshift), driving performance optimization and complex query development. Drive the adoption of best practices for dbt development, including modularity, testing, and documentation, across the team. Influence the selection and implementation … robust data structures. Exceptional knowledge and extensive experience with dbt for designing, building, and optimizing complex enterprise-level data models and transformations. Deep experience with cloud data warehouses (BigQuery, Redshift), including performance tuning and cost optimization. Strong proficiency with workflow orchestration tools like Airflow, capable of designing and implementing complex, production-grade DAGs. Extensive experience with multi-cloud environments More ❯
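As an illustration of the production-grade DAG work mentioned, a compact hedged Airflow sketch using the TaskFlow API. The task bodies, schedule, and dbt invocation are hypothetical:

```python
# Daily Airflow DAG: extract raw data, then run dbt transformations downstream.
# Schedule, task bodies, and the dbt project path are illustrative.
# The `schedule` parameter assumes Airflow 2.4+.
from datetime import datetime

from airflow.decorators import dag, task
from airflow.operators.bash import BashOperator

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["analytics"])
def bronze_to_marts():
    @task
    def extract() -> str:
        # Placeholder: land raw data in the bronze layer, return its path.
        return "s3://example-bucket/bronze/2024-01-01/"

    run_dbt = BashOperator(
        task_id="run_dbt",
        bash_command="dbt build --project-dir /opt/dbt",
    )

    extract() >> run_dbt

bronze_to_marts()
```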
and modern data engineering tooling About You: Proven experience as a Data Engineer working with Snowflake in a production environment Strong cloud expertise, especially within AWS (S3, Glue, Lambda, Redshift, IAM, etc.) Proficient in SQL, Python, and tools like dbt, Airflow, or similar Experience optimising cost, performance, and scalability across large Snowflake environments Excellent communicator with a proactive, ownership More ❯
real ownership You'll need: Strong Python and SQL Solid AWS experience (Glue, Lambda, SQS, S3, etc.) Experience building/maintaining ETL pipelines Data modelling and warehousing knowledge (PostgreSQL, Redshift, Snowflake, etc.) Self-starter mindset: you'll get stuck in and find answers independently to bring back to the team Drive to own projects and put forwards More ❯
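A hedged sketch of one pipeline step this kind of stack implies: drain messages from SQS and land them in S3 as JSON lines. The queue URL and bucket name are placeholders:

```python
# Drain an SQS queue and land the messages in S3 as a JSON-lines object.
# Queue URL and bucket name are placeholders.
from datetime import datetime, timezone

import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

QUEUE_URL = "https://sqs.eu-west-2.amazonaws.com/123456789012/events"  # placeholder
BUCKET = "example-raw-zone"  # placeholder

def drain_once() -> None:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=5)
    messages = resp.get("Messages", [])
    if not messages:
        return

    lines = "\n".join(m["Body"] for m in messages)
    key = f"events/{datetime.now(timezone.utc):%Y/%m/%d/%H%M%S}.jsonl"
    s3.put_object(Bucket=BUCKET, Key=key, Body=lines.encode("utf-8"))

    # Delete only after a successful write, so failed batches are retried.
    sqs.delete_message_batch(
        QueueUrl=QUEUE_URL,
        Entries=[{"Id": m["MessageId"], "ReceiptHandle": m["ReceiptHandle"]} for m in messages],
    )
```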
or Power Designer Experience with data ingestion (both batch and streaming), CI/CD tooling (e.g. Azure DevOps, Terraform) and interrogation of databases such as SQL Server, Oracle and Redshift Experience developing solutions on any major cloud platform: Azure, AWS or GCP Experience with reporting tools such as Power BI, Tableau and Qlik Excellent communication skills with a More ❯
City of London, London, England, United Kingdom Hybrid / WFH Options
Avanti
a data engineering role, ideally in a cloud-native environment Strong programming skills in SQL and Python for data transformation and workflow automation Experience with AWS data tools (e.g. Redshift, Glue, Lambda, S3) and infrastructure tools such as Terraform Understanding of data modelling concepts (e.g. dimensional models, star/snowflake schemas) Knowledge of data quality, access controls, and compliance More ❯
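To make the AWS tooling concrete, a small hedged sketch that starts a Glue job and polls its state via boto3. The job name is a placeholder and the polling loop is simplified:

```python
# Start an AWS Glue job and poll until it reaches a terminal state.
# The job name is a placeholder; retry/backoff is simplified for brevity.
import time

import boto3

glue = boto3.client("glue")
JOB_NAME = "nightly-orders-etl"  # placeholder

run_id = glue.start_job_run(JobName=JOB_NAME)["JobRunId"]

while True:
    state = glue.get_job_run(JobName=JOB_NAME, RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print(f"run {run_id} finished: {state}")
        break
    time.sleep(30)
```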
ELT processes, automation Technical Requirements:
- Strong proficiency in SQL and Python programming
- Extensive experience with data modeling and data warehouse concepts
- Advanced knowledge of AWS data services, including: S3, Redshift, AWS Glue, AWS Lambda
- Experience with Infrastructure as Code using AWS CDK (see the sketch after this listing)
- Proficiency in ETL/ELT processes and best practices
- Experience with data visualization tools (Quicksight)
Required Skills More ❯
Employment Type: Contract
Rate: £350 - £400/day PTO, pension and national insurance
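Since the requirements above call out Infrastructure as Code with the AWS CDK, a minimal hedged sketch of a CDK (v2) stack in Python. The stack, bucket, and asset path are invented for illustration:

```python
# Minimal AWS CDK (v2) stack: an encrypted S3 bucket and a small Lambda.
# Names, runtime version, and asset path are illustrative assumptions.
import aws_cdk as cdk
from aws_cdk import aws_lambda as _lambda, aws_s3 as s3
from constructs import Construct

class DataPlatformStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        raw_bucket = s3.Bucket(
            self, "RawZone",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
        )

        loader = _lambda.Function(
            self, "Loader",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="loader.handler",
            code=_lambda.Code.from_asset("lambda/"),  # hypothetical path
        )
        raw_bucket.grant_read(loader)

app = cdk.App()
DataPlatformStack(app, "DataPlatformStack")
app.synth()
```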
Python or similar scripting language for test automation Experience with cloud platforms (AWS, GCP, or Azure), especially in data-related services Familiarity with data warehousing concepts (e.g., Snowflake, BigQuery, Redshift) Strong understanding of data governance, data profiling, and quality metrics Excellent problem-solving and communication skills Ability to work independently and as part of a distributed team Nice to More ❯
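By way of example, a small hedged pytest sketch of the data-quality checks such a role automates. The connection string, table, and columns are assumptions, and the Postgres URL would need a driver such as psycopg2 installed:

```python
# pytest checks for basic data-quality metrics on a warehouse table.
# Connection string, table, and column names are illustrative assumptions.
import pandas as pd
import pytest
from sqlalchemy import create_engine

ENGINE = create_engine("postgresql://user:***@localhost:5432/analytics")  # placeholder

@pytest.fixture(scope="module")
def orders() -> pd.DataFrame:
    return pd.read_sql("SELECT order_id, amount, created_at FROM orders", ENGINE)

def test_primary_key_is_unique(orders):
    assert orders["order_id"].is_unique

def test_no_null_amounts(orders):
    assert orders["amount"].notna().all()

def test_amounts_are_positive(orders):
    assert (orders["amount"] > 0).all()
```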
experience in building and managing data transformations with dbt, with experience in optimising complex transformations and documentation. Hands-on experience with popular cloud data warehouses such as Snowflake or Redshift, including knowledge of performance optimisation, data modeling, and query tuning. Highly proficient in data analysis tools and languages (e.g., SQL, Python). Strong understanding of data modeling principles and More ❯
London, England, United Kingdom Hybrid / WFH Options
ZILO™
roles Strong proficiency in SQL, ETL processes and database management systems (e.g., MySQL, PostgreSQL, MongoDB) Hands-on experience with AWS services for data processing, storage and analysis (e.g., S3, Redshift, EMR, Glue) Familiarity with programming languages such as Python or Java Understanding of data warehousing concepts and data modeling techniques Experience working with big data technologies (e.g., Hadoop, Spark More ❯
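And for the big data side, a brief hedged PySpark sketch of a typical aggregation job. The input path, column names, and output location are invented for illustration:

```python
# PySpark job: read raw order events from S3 and aggregate revenue per day.
# Input/output paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

orders = spark.read.parquet("s3a://example-bucket/orders/")  # placeholder path

daily = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(
        F.count("*").alias("orders"),
        F.sum("amount").alias("revenue"),
    )
    .orderBy("order_date")
)

daily.write.mode("overwrite").parquet("s3a://example-bucket/marts/daily_revenue/")
spark.stop()
```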