data platform, including data pipelines, orchestration and modelling. Lead the team in building and maintaining robust data pipelines, data models, and infrastructure using tools such as Airflow, AWS Redshift, dbt and Looker. Ensure the team follows agile methodologies to improve delivery cadence and responsiveness. Contribute to hands-on coding, particularly in areas requiring architectural input, prototyping, or critical delivery support. Support …
Cardiff, South Glamorgan, United Kingdom Hybrid / WFH Options
Monzo
of regulatory reporting and treasury operations in retail banking. Exposure to Python, Go or similar languages. Experience working with orchestration frameworks such as Airflow/Luigi. Have previously used dbt, Dataform or similar tooling. Used to agile ways of working (Kanban, Scrum). The Interview Process: our interview process involves 3 main stages: a 30-minute recruiter call, a 45-minute call with …
our data infrastructure. This position will focus on migrating existing data pipelines from our legacy platform (primarily built on Talend and Jenkins) to our modern data stack powered by dbt Cloud and Snowflake. This is a hands-on engineering role that requires strong technical expertise, practical migration experience, and the ability to work closely with data teams and business stakeholders. … environments, we'd love to meet you. What you will be doing: contribute towards the end-to-end design and migration of data pipelines from Talend/Jenkins to dbt Cloud and Snowflake; spearhead the migration of legacy data pipelines from Jenkins/Talend to dbt/Snowflake, ensuring data integrity, reliability, and optimal performance; collaborate with data engineers, analysts … and business teams to translate existing business logic and workflows into modular dbt models; identify, document, and refactor legacy processes for performance, maintainability, and scalability; implement CI/CD best practices for data transformation using dbt Cloud and Git-based workflows; monitor, troubleshoot, and optimize pipeline performance and ensure data integrity across the data platform; contribute to defining and improving data …
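As a rough, non-authoritative illustration of the Git-based CI/CD practice described above, the sketch below runs a "slim CI" dbt build that only rebuilds and tests models changed since the last production run. The artifact directory and target name are assumptions for the example, not this team's actual setup.

```python
"""Minimal sketch of a CI step for dbt in a Git-based workflow.

Assumes dbt Core is installed and that manifest.json artifacts from the
last production run have been downloaded into `prod-artifacts/`.
"""
import subprocess


def run_slim_ci(state_dir: str = "prod-artifacts", target: str = "ci") -> None:
    # 'state:modified+' selects changed models plus their downstream dependents;
    # '--state' points dbt at the production artifacts used for comparison.
    subprocess.run(
        ["dbt", "build", "--select", "state:modified+", "--state", state_dir, "--target", target],
        check=True,
    )


if __name__ == "__main__":
    run_slim_ci()
```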
Perform exploratory data analyses to identify emerging challenges and new opportunities, including proposing potential solutions. Build and maintain retail customer and commercial-related feature tables in our warehouse using dbt. Perform descriptive analysis using Looker visualisations and dashboards. Skills: extensive experience with SQL; deep experience with handling data in Python; experience with (or willingness to learn!) dimensional modelling and dbt …
We are seeking a passionate DataOps Engineer who loves optimizing pipelines, automating workflows, and scaling cloud-based data infrastructure. Key Responsibilities: Design, build, and optimize data pipelines using Airflow, dbt, and Databricks. Monitor and improve pipeline performance to support real-time and batch processing. Manage and optimize AWS-based data infrastructure, including S3 and Lambda, as well as Snowflake. Implement … supporting high-velocity data/development teams and designing and maintaining data infrastructure, pipelines, and automation frameworks. You should also have experience streamlining data workflows using tools like Airflow, dbt, Databricks, and Snowflake while maintaining data integrity, security, and performance. Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent work experience. Minimum of 3 years …
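For illustration only, here is a minimal sketch of the kind of Airflow DAG such a role might own, with dbt invoked from the command line. It assumes Airflow 2.4+; the DAG name, schedule and shell commands are placeholder assumptions rather than this employer's actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Daily pipeline: land raw data, then run dbt transformations downstream.
with DAG(
    dag_id="daily_dbt_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_to_s3",
        bash_command="echo 'placeholder for an extract/load step'",
    )
    transform = BashOperator(
        task_id="run_dbt",
        bash_command="dbt build --profiles-dir .",
    )

    extract >> transform  # run the dbt build only after extraction succeeds
```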
questions. Excellent communicator, whether with dashboards, written summaries, or Loom videos. Strong product and business intuition: you know what matters and what doesn't. Nice to have: experience with dbt, Dagster, Metabase, or Snowflake. Our Tech Stack: we work with a modern, scalable data stack. Data & Infrastructure: Snowflake, S3, OpenSearch, AWS Lambda & Batch, Dagster, dbt. Analytics & Experimentation: Amplitude, AppsFlyer, Metabase … Orchestration & Modeling: Dagster for pipelines, dbt for transformations, and a strong experimentation layer across tools. You'll help shape insights and build models in a fast-moving, data-rich environment. Perks & Benefits: 29 days PTO in addition to UK Bank Holidays; opportunity to work from abroad for up to 30 consecutive days a year; Canary Wharf office; gym membership; MacBook …
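As a hedged illustration of the Dagster-plus-dbt pattern this stack implies, here is a minimal software-defined asset sketch. The asset names and the pandas transformation are invented for the example and assume Dagster 1.x with pandas installed; they are not this company's actual models.

```python
import pandas as pd
from dagster import Definitions, asset, materialize


@asset
def raw_events() -> pd.DataFrame:
    # Placeholder extract step; a real pipeline would read from S3/Snowflake.
    return pd.DataFrame({"user_id": [1, 2, 2], "event": ["signup", "signup", "purchase"]})


@asset
def users_per_event(raw_events: pd.DataFrame) -> pd.DataFrame:
    # Downstream asset: Dagster wires the dependency via the parameter name.
    return raw_events.groupby("event", as_index=False).agg(users=("user_id", "nunique"))


defs = Definitions(assets=[raw_events, users_per_event])

if __name__ == "__main__":
    # Materialize both assets locally for a quick smoke test.
    materialize([raw_events, users_per_event])
```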
or alternatively, give me a call on (phone number removed). Keywords: Data Engineering, Data Engineer, Snowflake, ETL, ELT, ADF, Data Factory, Synapse Analytics, SSIS, Migration, Pipeline, Python, Spark, dbt, Azure, SQL, Leeds
to grow top-line revenues and guide commercial initiatives from our data. You'll own the analysis of the end-to-end customer journey, using our data stack (BigQuery, dbt, Hex) to create data models, data products, metrics and find insights that fuel our growth. You'll work closely with other engineers, marketers, product teams, and commercial teams to launch … and making recommendations from data insights to move the business forward. Excellent communication skills and experience collaborating with technical and non-technical teams. Bonus points: hands-on experience with dbt; programming experience (JavaScript, Python) or experience using data engineering tools; background in a consumer marketplace or another fast-growth startup; experience helping to build a company's data or …
Azure using Synapse, Data Factory, Data Lake and SQL. Role & Responsibilities: building end-to-end data pipelines in Azure (Data Factory and SQL); building workflows in SQL, Spark and dbt; data and dimensional modelling. Skills & Qualifications: Azure Data Factory, Synapse and SSIS; Python/Spark/PySpark; ideally Snowflake and dbt …
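Purely as an illustrative sketch of the kind of Spark transformation step such a pipeline might contain: the storage paths, table layout and column names below are hypothetical, not taken from this role.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily").getOrCreate()

# Hypothetical ADLS Gen2 paths; a real job would read from the lake's raw zone.
orders = spark.read.parquet("abfss://raw@datalake.dfs.core.windows.net/orders/")

# Aggregate raw orders into a daily summary suitable for dimensional modelling.
daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(
        F.countDistinct("customer_id").alias("customers"),
        F.sum("amount").alias("revenue"),
    )
)

daily.write.mode("overwrite").parquet(
    "abfss://curated@datalake.dfs.core.windows.net/orders_daily/"
)
```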
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
with architects, analysts, and business stakeholders to unlock insights and enable innovation. What You'll Be Doing: Design and build robust, automated data pipelines using Azure Data Factory, Synapse, dbt, and Databricks. Integrate data from enterprise systems (e.g. Dynamics, iTrent, Unit4) into a unified data platform. Cleanse, transform, and model data to support BI tools (e.g., Power BI) and AI …
South West London, London, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
with architects, analysts, and business stakeholders to unlock insights and enable innovation. What You'll Be Doing: Design and build robust, automated data pipelines using Azure Data Factory, Synapse, dbt, and Databricks. Integrate data from enterprise systems (e.g. Dynamics, iTrent, Unit4) into a unified data platform. Cleanse, transform, and model data to support BI tools (e.g., Power BI) and AI …
across both engineering and analytics, and is excited about building internal tools that directly improve product and customer experiences. You'll be working with a mature stack (Python, BigQuery, dbt, FastAPI, Metabase), and your day-to-day will include both writing production-level code and making data actually useful for decision-makers. Main responsibilities: build, maintain, and optimize data pipelines … using Python and dbt; own and evolve the backend codebase (FastAPI, Docker); ensure pipeline reliability, code quality, and proper testing/documentation; maintain and extend data models and the BI layer (Metabase); collaborate closely with product, data science, and leadership on strategic data tools; design and deliver internal tools, potentially leveraging LLMs and OpenAI APIs; write clean, production-grade code … with version control (GitLab). Experience Required: 5+ years of Python, including writing production-level APIs; strong SQL and dbt for data transformation and modeling; experience with modern data stack components: BigQuery, GCS, Docker, FastAPI; solid understanding of data warehousing principles; proven ability to work cross-functionally with both technical and non-technical stakeholders; comfortable maintaining and optimizing BI dashboards (Metabase …
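To give a sense of how such a stack fits together, here is a minimal, hypothetical sketch of a FastAPI endpoint serving a metric from a BigQuery table. The table name, route and credentials setup are illustrative assumptions, not this team's actual code.

```python
from datetime import date, timedelta

from fastapi import FastAPI
from google.cloud import bigquery

app = FastAPI()
bq = bigquery.Client()  # assumes GCP credentials are configured in the environment


@app.get("/metrics/daily-signups")
def daily_signups(days: int = 30):
    # `analytics.daily_signups` is a hypothetical dbt-built table, for illustration only.
    sql = """
        SELECT signup_date, signups
        FROM `analytics.daily_signups`
        WHERE signup_date >= @start_date
        ORDER BY signup_date
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter(
                "start_date", "DATE", date.today() - timedelta(days=days)
            )
        ]
    )
    rows = bq.query(sql, job_config=job_config).result()
    return [dict(row.items()) for row in rows]
```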
take the initiative, and identify creative solutions to deliver outcomes in the face of obstacles. Knowledge of common data science tools around SQL-based data warehousing (e.g. Snowflake, Databricks, dbt), BI tools (e.g. Tableau, Looker), workflow orchestration, and MLOps. Excellent spoken and written English skills. Fluency with scripting in Python. Ability to work effectively across time zones. Teammates will …
reliability, availability and scalability of all data systems. Requirements: significant experience as a Data Science Engineer (or similar senior data role); expertise in ETL tooling and pipeline development (e.g. dbt, Metabase); proficiency in Python or R for data modelling and analysis; experience working with cloud platforms (AWS, GCP or Azure); track record of deploying and maintaining reliable data systems at …
and statistical techniques to healthcare data. Ability to manage a technical product lifecycle, including through end-user feedback and testing. Desirable: experience in R and in JavaScript; proficiency using dbt +/- Snowflake +/- Azure. Personal Behaviours (Essential): proven experience in the ability to interact with colleagues at all levels, both clinical and non-clinical, within healthcare; ability to …
a team of Data Engineers. Experience with data modelling, data warehousing, and building ETL pipelines. Experience with AWS (S3, EKS, EC2, RDS) or similar cloud services, Snowflake, Fivetran, Airbyte, dbt, Docker, Argo. Experience in SQL, Python, and Terraform. Experience with building data pipelines and applications to stream and process datasets. Robust understanding of DevOps principles is required. Experience managing cloud …
complex operational businesses. Experience using LLMs or AI tools to structure and extract meaning from unstructured data. Experience automating workflows and deploying model pipelines (e.g. MLflow, GCP Vertex, Airflow, dbt or similar). Exposure to business planning, pricing, or commercial decision-making. Familiarity with geospatial data. Experience in fast-scaling startups or operational teams. We're flexible on experience: if you …