south west london, south east england, United Kingdom
shape something from the ground up — this is for you. What you’ll do:
• Design and build a cloud-native data warehouse
• Develop scalable ETL/ELT pipelines and dimensional models (Kimball, Data Vault, etc.)
• Integrate multiple data sources (cloud & on-prem)
• Ensure high data quality, performance and reliability
• Collaborate with engineers to integrate data best practices across teams
• Champion data quality, governance, and documentation
Key Requirements:
• Strong experience with Python, SQL, and modern ETL tools (e.g., Airflow, dbt)
• Solid grasp of cloud platforms (AWS/GCP/Azure) and data warehouses (e.g., BigQuery, Snowflake)
• Familiarity with streaming technologies (Kafka …
experience with 5+ years in a Sales Engineering capacity. You have strong technical skills and professional experience in: web and mobile app development, SQL, ETL or data pipelines, and data analysis. You have experience with cloud data warehouses/lakes including Snowflake, Databricks, BigQuery, Redshift, S3, and ADLS. You have …
london, south east england, United Kingdom Hybrid / WFH Options
Block MB
especially in AWS and open-source data tech. What you bring:
• Expert-level SQL and strong scripting (Python/DBT).
• Deep knowledge of ETL/ELT processes, CI/CD, and data performance tuning.
• Strong communication skills, with a knack for translating complex tech into clear business insights.
• Proven …
data sets, produce dashboards and drive actionable insights.
• SQL Development: Write and optimise complex Microsoft SQL Server queries for data extraction, transformation and loading (ETL).
• Data Governance: Implement master data management and governance policies to maintain data quality, compliance and lineage.
• Stakeholder Management: Communicate effectively with project managers and …
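The SQL-side ETL pattern this listing describes — set-based extract, transform and load queries rather than row-by-row processing — can be sketched as follows. This is a minimal illustration, not the employer's actual schema: sqlite3 stands in for Microsoft SQL Server, and all table and column names (raw_sales, sales_summary) are invented.

```python
import sqlite3

# Hypothetical staging and target tables; sqlite3 stands in for SQL Server.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_sales (product TEXT, amount REAL, sold_on TEXT);
    INSERT INTO raw_sales VALUES
        ('widget', 10.0, '2024-01-05'),
        ('widget', 12.5, '2024-01-06'),
        ('gadget', 99.0, '2024-01-05');
    CREATE TABLE sales_summary (product TEXT PRIMARY KEY, total REAL);
""")

# Transform + load in one set-based statement: aggregate inside the query
# instead of looping over rows in application code.
conn.execute("""
    INSERT INTO sales_summary (product, total)
    SELECT product, SUM(amount) FROM raw_sales GROUP BY product
""")
conn.commit()

totals = dict(conn.execute("SELECT product, total FROM sales_summary"))
print(totals)  # totals == {'widget': 22.5, 'gadget': 99.0}
```

The same INSERT … SELECT shape carries over to SQL Server, where query optimisation would focus on indexing the grouping key.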
south west london, south east england, United Kingdom
Airswift
• Experience working with cloud platforms (AWS preferred) and Infrastructure-as-Code tools.
• Solid understanding of relational databases and SQL.
• Proven track record building robust ETL pipelines, ideally using Airflow or a similar tool.
• Familiarity with best practices in software engineering: version control, testing, packaging, and code reviews.
• Quantitative problem-solving …
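The "robust ETL pipelines, ideally using Airflow" requirement above boils down to structuring work as discrete extract/transform/load steps. A minimal sketch in plain Python — in Airflow each function below would become a DAG task (for example via the @task decorator); the function and field names here are hypothetical, not from any listing:

```python
# Hypothetical three-step ETL pipeline; in Airflow these would be DAG tasks.

def extract():
    # Stand-in for pulling rows from a source system or API.
    return [
        {"id": 1, "amount": "10.50", "currency": "GBP"},
        {"id": 2, "amount": "3.25", "currency": "GBP"},
        {"id": 2, "amount": "3.25", "currency": "GBP"},  # duplicate row
    ]

def transform(rows):
    # Typical cleaning steps: deduplicate on the key, cast string amounts.
    seen, clean = set(), []
    for row in rows:
        if row["id"] in seen:
            continue
        seen.add(row["id"])
        clean.append({**row, "amount": float(row["amount"])})
    return clean

def load(rows, target):
    # Stand-in for writing to a warehouse table; returns rows written.
    target.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded)  # → 2
```

Keeping each step a pure function is what makes the pipeline testable and retry-safe once a scheduler like Airflow owns the orchestration.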
west london, south east england, United Kingdom Hybrid / WFH Options
Ocho
of opportunities for ownership and innovation. Key Responsibilities
• Design and deploy cloud-based data platforms using Snowflake, DBT, and related tools
• Develop performant, scalable ETL/ELT pipelines across varied data sources
• Build and maintain dimensional models using Kimball, Data Vault, or Data Mesh methodologies
• Collaborate with cross-functional teams …
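The Kimball-style dimensional models this listing asks for pair a fact table with dimension tables joined on surrogate keys. A hedged sketch of the idea, again with sqlite3 as a stand-in warehouse and invented table names:

```python
import sqlite3

# Hypothetical star schema: one fact table keyed to one dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, region TEXT);
    CREATE TABLE fact_orders (order_id INTEGER, customer_key INTEGER, amount REAL);
    INSERT INTO dim_customer VALUES (1, 'Acme', 'UK'), (2, 'Globex', 'US');
    INSERT INTO fact_orders VALUES (100, 1, 50.0), (101, 1, 25.0), (102, 2, 80.0);
""")

# Analytical queries slice the facts by dimension attributes.
rows = conn.execute("""
    SELECT d.region, SUM(f.amount)
    FROM fact_orders f JOIN dim_customer d USING (customer_key)
    GROUP BY d.region ORDER BY d.region
""").fetchall()
print(rows)  # → [('UK', 75.0), ('US', 80.0)]
```

The point of the star shape is that every report is the same join pattern: facts aggregated, dimensions filtered and grouped.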
pipelines using Python and PySpark, enabling powerful analytics and smarter business decisions across the organisation. What You'll Be Doing
• Design and build scalable ETL/ELT data pipelines using Python and PySpark
• Lead and support data migration initiatives across legacy and cloud-based platforms
• Collaborate with analysts, data scientists …
Migration: Migrate data from existing Big Data platforms into Dynamics 365, Dataverse and other Azure data environments. Migrating from hand-rolled code-based ETL processes to modern Azure Cloud Computing solutions
• Data Aggregation: Aggregate from smaller data sources and SaaS platforms into our Microsoft data environments
• Azure Development: Utilize …
generative AI tools. Knowledge of AWS services such as EC2, DynamoDB, S3, Redshift, and Aurora. Experience developing scorecards and dashboards. Experience with third-party ETL tools like IBM DataStage or Informatica. We are committed to an inclusive culture that empowers all Amazonians. If you require workplace accommodations during the application …
capacities. Key Responsibilities:
• Design, develop, and deploy data pipelines using AWS tools such as S3, Redshift, Glue, Lambda, Step Functions, and DynamoDB
• Build robust ETL/ELT solutions with Matillion for cloud data warehousing
• Collaborate with client stakeholders to refine data needs and deliver production-ready solutions
• Contribute to infrastructure …
and S3
• Strong consulting experience - strong stakeholder management and experience leading large teams
• Heavy involvement in RFI + RFPs
• Proficiency in data integration/ETL development, including ELT patterns and hands-on experience with Matillion
• Skilled in handling structured and unstructured data (JSON, XML, Parquet, etc.)
• Comfortable working in Linux …
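"Handling structured and unstructured data (JSON, XML, Parquet)" in practice often means flattening semi-structured records into tabular rows before warehouse loading. A minimal sketch using only the standard library; the record shape and field names are invented for illustration:

```python
import json

# Hypothetical semi-structured input: nested objects, optional keys, arrays.
raw = """
[
  {"id": 1, "user": {"name": "Ada", "country": "UK"}, "tags": ["a", "b"]},
  {"id": 2, "user": {"name": "Linus"}, "tags": []}
]
"""

def flatten(record):
    # Pull nested fields up to the top level; missing keys become None,
    # and variable-length arrays collapse to a scalar (here, a count).
    return {
        "id": record["id"],
        "user_name": record["user"].get("name"),
        "user_country": record["user"].get("country"),
        "n_tags": len(record["tags"]),
    }

rows = [flatten(r) for r in json.loads(raw)]
print(rows[1])  # → {'id': 2, 'user_name': 'Linus', 'user_country': None, 'n_tags': 0}
```

The same flatten-then-load shape applies whether the target is Snowflake, Redshift or Parquet files, with the schema decisions (what becomes a column, what gets counted or exploded) being the real design work.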
general business stakeholders to gather requirements and ensure solutions are fit-for-purpose. This role would be well-suited to a seasoned SQL/ETL Developer who is looking for an opportunity to take real ownership of their work in a demanding and dynamic environment - making a lasting and valuable …
requirement and design documents, working backward from customer needs
BASIC QUALIFICATIONS
- 5+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience mentoring team members …
on Databricks
• Analysis via Python Jupyter notebooks
• PySpark in Databricks workflows for heavy lifting
• Streamlit and Python for dashboarding
• Airflow DAGs with Python for ETL running on Kubernetes and Docker
• Django for custom app/database development
• Kubernetes for container management, with Grafana/Prometheus for monitoring
• Hugo/Markdown …