data pipelines. These pipelines ingest and transform data from diverse sources (e.g., email, CSV, ODBC/JDBC, JSON, XML, Excel, Avro, Parquet) using AWS technologies such as S3, Athena, Redshift, and Glue, and programming languages like Python and Java (Docker/Spring). What You’ll Do: Lead the design and development of scalable data pipelines and ETL processes. Collaborate … UK for at least 5 years. What We’re Looking For: Proven experience leading data engineering teams and delivering technical solutions. Strong background in cloud data platforms, especially AWS (Redshift, Athena, EC2, IAM, Lambda, CloudWatch). Proficiency in automation tools and languages (e.g., GitHub/GitLab, Python, Java). Skilled in stakeholder engagement and translating requirements into actionable insights. Ability …
Penryn, England, United Kingdom Hybrid / WFH Options
Aspia Space
including geospatial data—for training our large-scale AI models. Key Responsibilities: • Architect, design, and manage scalable data pipelines and infrastructure across on-premise and cloud environments (AWS S3, Redshift, Glue, Step Functions). • Ingest, clean, wrangle, and preprocess large, diverse, and often messy datasets—including structured, unstructured, and geospatial data. • Collaborate with ML and research teams to ensure … experience in data engineering, data architecture, or similar roles. • Expert proficiency in Python, including popular data libraries (Pandas, PySpark, NumPy, etc.). • Strong experience with AWS services—specifically S3, Redshift, Glue (Athena a plus). • Solid understanding of applied statistics. • Hands-on experience with large-scale datasets and distributed systems. • Experience working across hybrid environments: on-premise HPCs and …
unit tests using Java, React, and PL/SQL. Design, develop, and maintain efficient ETL pipelines using AWS Glue for data extraction, transformation, and loading across multiple sources and destinations (e.g., S3, Redshift, RDS). Write complex scripts within AWS Glue to handle custom data transformations, business rules, and data-cleaning tasks. Configure and manage Glue Crawlers for data cataloging and schema … frameworks such as REST, JavaScript ES6, TypeScript, JSON, Java, Python, RDBMS, ORM. 2+ years working with AWS Glue for ETL development. Experience with AWS data services such as S3, Redshift, RDS, and Lambda. 2+ years estimating, planning, and executing complex projects using Agile methodologies. Experience with SonarQube. Experience with version control tools such as GitHub. Experience with HTTP and …
Newcastle Upon Tyne, England, United Kingdom Hybrid / WFH Options
Delaney & Bourton
selection, cost management, and team management. Experience required: Experience in building and scaling BI and Data Architecture. Expertise in modern BI and Data DW platforms such as Snowflake, BigQuery, Redshift, Power BI, etc. Background in ETL/ELT tooling and Data Pipelines such as DBT, Fivetran, Airflow. Experienced in cloud-based solutions (Azure, AWS, or Google).
Greater London, England, United Kingdom Hybrid / WFH Options
Ignite Digital Talent
Strong hands-on experience with Python in a data context. Proven skills in SQL. Experience with Data Warehousing (DWH), ideally with Snowflake or similar cloud data platforms (Databricks or Redshift). Experience with DBT, Kafka, Airflow, and modern ELT/ETL frameworks. Familiarity with data visualisation tools like Sisense, Looker, or Tableau. Solid understanding of data architecture, transformation workflows, and …
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
JR United Kingdom
data architecture. Work with technologies such as Python, Java, Scala, Spark, and SQL to extract, clean, transform, and integrate data. Build scalable solutions using AWS services like EMR, Glue, Redshift, Kinesis, Lambda, and DynamoDB. Process large volumes of structured and unstructured data, integrating multiple sources to create efficient data pipelines. Collaborate with engineering teams to integrate data solutions into …
London, England, United Kingdom Hybrid / WFH Options
Builder.ai
new approaches. Extensive software engineering experience with Python (no data science background required). Experience with production microservices (Docker/Kubernetes) and cloud infrastructure. Knowledge of databases like Postgres, Redshift, Neo4j is a plus. Why You Should Join This role sits at the intersection of data science and DevOps. You will support data scientists, design, deploy, and maintain microservices …
technologies. Develops and maintains scalable cloud-based data infrastructure, ensuring alignment with the organization's decentralized data management strategy. Designs and implements ETL pipelines using AWS services (e.g., S3, Redshift, Glue, Lake Formation, Lambda) to support data domain requirements and self-service analytics. Collaborates with data domain teams to design and deploy domain-specific data products, adhering to organizational …
catch anomalies early in both pipelines and the data warehouse. Continuously enhance data processes and infrastructure. Must-Have Requirements: Strong SQL skills and experience with cloud-based databases like Redshift and AWS RDS. Solid Python knowledge, including packages for analytics, data transformation, APIs, and ML/AI. Proven experience building and maintaining ETLs/ELTs, ideally using dbt …
or more years of AWS experience using one or more of the associated services: S3, EMR, Glue Jobs, Lambda, Aurora, CloudTrail, SNS, SQS, CloudWatch. Experience with databases such as Redshift, PostgreSQL, SQL Server, Oracle. Experience working with REST APIs using Glue and/or Lambda. Eight (8) or more years designing, building, and maintaining enterprise-scale databases. Eight (8) or more …
Spotfire. Shape schema design, enrich metadata, and develop APIs for reliable and flexible data access. Optimize storage and compute performance across data lakes and warehouses (e.g., Delta Lake, Parquet, Redshift). Document data contracts, pipeline logic, and operational best practices to ensure long-term sustainability and effective collaboration. Required Qualifications: Demonstrated experience as a data engineer in biopharmaceutical or …
London, England, United Kingdom Hybrid / WFH Options
Prolific
hands-on experience deploying production-quality code, with proficiency in Python for data processing and related packages. Data Infrastructure Knowledge: Deep understanding of SQL and analytical data warehouses (Snowflake, Redshift preferred) with proven experience implementing ETL/ELT best practices at scale. Pipeline Management: Hands-on experience with data pipeline tools (Airflow, dbt) and strong ability to optimise for …
new technologies essential for automating models and advancing our engineering practices. You're familiar with cloud technologies. You have experience working with data in a cloud data warehouse (Redshift, Snowflake, Databricks, or BigQuery). Experience with a modern data modeling technology (DBT). You document and communicate clearly. Some experience with technical content writing would be a plus. You …
business problems. Comfort with rapid prototyping and disciplined software development processes. Experience with Python, ML libraries (e.g., spaCy, NumPy, SciPy, Transformers, etc.), data tools and technologies (Spark, Hadoop, Hive, Redshift, SQL), and toolkits for ML and deep learning (SparkML, TensorFlow, Keras). Demonstrated ability to work on multi-disciplinary teams with diverse skillsets. Deploying machine learning models and systems …
performing data analytics on AWS platforms. Experience in writing efficient SQL queries and implementing complex ETL transformations on big data platforms. Experience with Big Data technologies (Spark, Impala, Hive, Redshift, Kafka, etc.). Experience in data quality testing; adept at writing test cases and scripts, presenting and resolving data issues. Experience with Databricks, Snowflake, and Iceberg is required. Preferred qualifications, capabilities …
Strong familiarity with data warehousing, data lake/lakehouse architectures, and cloud-native analytics platforms. Hands-on experience with SQL and cloud data platforms (e.g., Snowflake, Azure, AWS Redshift, GCP BigQuery). Experience with BI/analytics tools (e.g., Power BI, Tableau) and data visualization best practices. Strong knowledge of data governance, data privacy, and compliance frameworks (e.g., …
and manage DBT models for data transformation and modeling in a modern data stack. Proficiency in SQL, Python, and PySpark. Experience with AWS services such as S3, Athena, Redshift, Lambda, and CloudWatch. Familiarity with data warehousing concepts and modern data stack architectures. Experience with CI/CD pipelines and version control (e.g., Git). Collaborate with data analysts …
if you have 4+ years of relevant work experience in Analytics, Business Intelligence, or Technical Operations. Mastery of SQL, Python, and ETL using big data tools (Hive/Presto, Redshift). Previous experience with web frameworks for Python such as Django/Flask is a plus. Experience writing data pipelines using Airflow. Fluency in Looker and/or Tableau. Strong …