work on fast-moving, critical projects, contributing to design decisions. Proven project delivery and advanced data validation experience are advantageous. Key Skills & Experience: Proficiency with database systems such as Amazon Aurora MySQL, Microsoft SQL Server, Oracle, DynamoDB, etc. Expertise in designing, developing, and maintaining complex SQL queries and stored procedures. Performance tuning and optimization of data pipelines. Implementing data monitoring … solutions. Developing scripts and automation tools for data management. Knowledge of version control systems like Git. Familiarity with AWS services: DMS, Lambda, S3, Step Functions, CloudWatch, Redshift, Glue, EventBridge. Experience with CI/CD pipelines and repositories (e.g., Bitbucket). Automated testing design and implementation. Experience with large-scale data environments and warehousing concepts. Data ingestion and integration experience.
best" talent and have a diverse workforce. Your role will include: Design, develop, test, and deploy data integration processes (batch or real-time) in AWS using tools such as Redshift, S3, Glue, Athena, Lambda and Snowflake. Building pipelines, doing migrations, integrations with external systems. Solving simple and complex problems. Liaising with stakeholders across the business as an internal … Warehouse. They are looking for a candidate that has experience in... AWS Data Platform, strong knowledge of Snowflake, S3, Lambda, Data Modelling, DevOps Practices, Airflow, DBT, Data Vault, Redshift, ODS, strong SQL/Python. This role is an urgent requirement; there are limited interview slots left. If interested, send an up-to-date CV to …
City of London, London, United Kingdom Hybrid / WFH Options
Anson McCade
It suits someone who thrives in greenfield environments, enjoys client engagement, and values clean, scalable, well-documented engineering. Key Responsibilities: Design and build robust data pipelines using AWS (S3, Redshift, Glue, Lambda, Step Functions, DynamoDB). Deliver ETL/ELT solutions with Matillion and related tooling. Work closely with client teams to define requirements and hand over production-ready … solutions. Own infrastructure and deployment via CI/CD and IaC best practices. Contribute to technical strategy and mentor junior engineers. Requirements: Strong hands-on AWS experience – S3, Redshift, Glue essential. Proven experience building ETL/ELT pipelines in cloud environments. Proficient in working with structured/unstructured data (JSON, XML, CSV, Parquet). Skilled in working with relational …
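The "structured/unstructured data (JSON, XML, CSV, Parquet)" requirement above is essentially about normalising heterogeneous inputs into one shape before loading. A minimal illustrative sketch of that idea in Python, covering just the JSON and CSV cases (the field names "id" and "amount" are hypothetical, not from the advert):

```python
import csv
import io
import json

# Illustrative sketch: normalise records arriving in two of the formats
# named above (JSON and CSV) into one common shape before loading.
# Field names ("id", "amount") are hypothetical examples.

def records_from_json(text: str) -> list[dict]:
    """Parse a JSON array of objects into normalised records."""
    return [{"id": str(r["id"]), "amount": float(r["amount"])}
            for r in json.loads(text)]

def records_from_csv(text: str) -> list[dict]:
    """Parse CSV rows (with a header row) into the same normalised shape."""
    reader = csv.DictReader(io.StringIO(text))
    return [{"id": row["id"], "amount": float(row["amount"])}
            for row in reader]

json_src = '[{"id": 1, "amount": "9.50"}]'
csv_src = "id,amount\n2,3.25\n"

records = records_from_json(json_src) + records_from_csv(csv_src)
print(records)  # [{'id': '1', 'amount': 9.5}, {'id': '2', 'amount': 3.25}]
```

A production pipeline would do the same normalisation with schema validation and a columnar output format such as Parquet; the structure (one parser per source format, one target record shape) is the part this sketch shows.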
GCP/AWS/Azure platforms) and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF etc. Excellent consulting experience and ability to design and build solutions … experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Designing Databricks-based solutions for Azure/AWS, Jenkins, Terraform, Stackdriver or …
Penryn, England, United Kingdom Hybrid / WFH Options
Aspia Space
including geospatial data—for training our large-scale AI models. Key Responsibilities: •Architect, design, and manage scalable data pipelines and infrastructure across on-premise and cloud environments (AWS S3, Redshift, Glue, Step Functions). •Ingest, clean, wrangle, and preprocess large, diverse, and often messy datasets—including structured, unstructured, and geospatial data. •Collaborate with ML and research teams to ensure … experience in data engineering, data architecture, or similar roles. •Expert proficiency in Python, including popular data libraries (Pandas, PySpark, NumPy, etc.). •Strong experience with AWS services—specifically S3, Redshift, Glue (Athena a plus). •Solid understanding of applied statistics. •Hands-on experience with large-scale datasets and distributed systems. •Experience working across hybrid environments: on-premise HPCs and …
London, England, United Kingdom Hybrid / WFH Options
Harnham
decisions and mentor junior engineers Collaborate across engineering, data science, and product teams to deliver business impact Skills & Experience: Expert in SQL , dbt , and cloud data warehouses (e.g., BigQuery, Redshift) Strong experience with Airflow , Python , and multi-cloud environments (AWS/GCP) Proven background in designing and scaling analytics solutions in agile environments Proven experience as an Analytics Engineer …
have requirements: Driven self-starter mentality, with the ability to work independently. Python. SQL. Experience building and maintaining ETL/data pipelines. Expertise in data warehousing (any of BigQuery, Redshift, Snowflake or Databricks is fine). Experience working with cloud infrastructures (AWS and/or GCP being most advantageous). 👍 Bonus points for experience with: Airflow; RudderStack, Expo and/or …
London, England, United Kingdom Hybrid / WFH Options
ZipRecruiter
selection, cost management, and team management. Experience required: Experience in building and scaling BI and Data Architecture. Expertise in modern BI and data warehouse platforms such as Snowflake, BigQuery, Redshift, Power BI, etc. Background in ETL/ELT tooling and data pipelines such as DBT, Fivetran, Airflow. Experience with cloud-based solutions (Azure, AWS, or Google).
Greater London, England, United Kingdom Hybrid / WFH Options
Ignite Digital Talent
Strong hands-on experience with Python in a data context Proven skills in SQL Experience with Data Warehousing (DWH) ideally with Snowflake or similar cloud data platforms (Databricks or Redshift) Experience with DBT, Kafka, Airflow, and modern ELT/ETL frameworks Familiarity with data visualisation tools like Sisense, Looker, or Tableau Solid understanding of data architecture, transformation workflows, and …
London, South East, England, United Kingdom Hybrid / WFH Options
Client Server Ltd
have strong Python and SQL coding skills You have experience with big data frameworks and tools including Spark You have a good knowledge of AWS data services (e.g. S3, Redshift, EMR, Glue) You have strong analytical, problem solving and critical thinking skills You have excellent communication skills and experience of working across teams What's in it for you …
Ops experience is essential Proficiency in SQL and Python or similar languages for data analysis Experience with AWS and Azure cloud engineering Experience with cloud data platforms like AWS Redshift or Azure Synapse and automated Terraform deployment Strong analytical and problem-solving skills Experience with data visualization tools such as Tableau, Power BI, and Excel Excellent communication and collaboration …
Spotfire. Shape schema design, enrich metadata, and develop APIs for reliable and flexible data access. Optimize storage and compute performance across data lakes and warehouses (e.g., Delta Lake, Parquet, Redshift). Document data contracts, pipeline logic, and operational best practices to ensure long-term sustainability and effective collaboration. Required Qualifications Demonstrated experience as a data engineer in biopharmaceutical or …
London, England, United Kingdom Hybrid / WFH Options
Prolific
hands-on experience deploying production-quality code with proficiency in Python for data processing and related packages. Data Infrastructure Knowledge: Deep understanding of SQL and analytical data warehouses (Snowflake, Redshift preferred) with proven experience implementing ETL/ELT best practices at scale. Pipeline Management: Hands-on experience with data pipeline tools (Airflow, dbt) and strong ability to optimise for …
new technologies essential for automating models and advancing our engineering practices. You're familiar with cloud technologies. You have experience working with data in a cloud data warehouse (Redshift, Snowflake, Databricks, or BigQuery). Experience with a modern data modeling technology (DBT). You document and communicate clearly. Some experience with technical content writing would be a plus. You …
/Data Engineering/BI Engineering experience. Understanding of data warehousing, data modelling concepts and structuring new data tables. Knowledge of cloud-based MPP data warehousing (e.g. Snowflake, BigQuery, Redshift). Nice to have: Experience developing in a BI tool (Looker or similar). Good practical understanding of version control. SQL. ETL/ELT knowledge, experience with DAGs to manage script …
. Strong familiarity with data warehousing, data lake/lakehouse architectures, and cloud-native analytics platforms. Hands-on experience with SQL and cloud data platforms (e.g., Snowflake, Azure, AWS Redshift, GCP BigQuery). Experience with BI/analytics tools (e.g., Power BI, Tableau) and data visualization best practices. Strong knowledge of data governance, data privacy, and compliance frameworks (e.g. …
and manage DBT models for data transformation and modeling in a modern data stack. Proficiency in SQL, Python, and PySpark. Experience with AWS services such as S3, Athena, Redshift, Lambda, and CloudWatch. Familiarity with data warehousing concepts and modern data stack architectures. Experience with CI/CD pipelines and version control (e.g., Git). Collaborate with data analysts …
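A DBT model, as asked for above, is at heart a SELECT statement that the tool materialises as a table or view in the warehouse. A minimal illustrative sketch of that staging-to-mart pattern, run here against an in-memory SQLite database rather than a real warehouse (the table and column names are hypothetical, not from the advert):

```python
import sqlite3

# Illustrative sketch of the kind of SQL transformation a dbt model
# encodes: a staging table aggregated into a mart. Run against an
# in-memory SQLite database; names are hypothetical examples.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO stg_orders VALUES (1, 10, 25.0), (2, 10, 5.0), (3, 11, 12.5);
""")

# In dbt this SELECT would live in a model file such as
# models/mart_customer_totals.sql, and dbt would materialise it
# as a table or view in the target warehouse.
mart_sql = """
    SELECT customer_id,
           COUNT(*)   AS n_orders,
           SUM(amount) AS total_amount
    FROM stg_orders
    GROUP BY customer_id
    ORDER BY customer_id
"""

rows = conn.execute(mart_sql).fetchall()
print(rows)  # [(10, 2, 30.0), (11, 1, 12.5)]
```

In a real stack the same SELECT would run against Redshift or Athena (both listed above), with dbt handling materialisation, dependencies between models, and testing.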