help them further grow their already exciting business. Within this role, you will be responsible for maintaining, supporting and expanding existing data pipelines using DBT, Snowflake and S3. You will also be tasked with implementing standardised data ingress/egress pipelines, coupled with onboarding new, disparate data sets sourced from …
variety of databases Working knowledge of one or more of the cloud platforms (AWS, Azure, GCP) Experience building ETL/ELT pipelines, specifically using DBT for structured and semi-structured datasets Any orchestration tooling such as Airflow, Dagster, Azure Data Factory, Fivetran, etc. It would be nice to have: Software …
use. Your typical day will look like this: Connect with the team at standup to catch up on the latest. Build data pipelines with Spark or DBT on Starburst. Use SQL to transform data into meaningful insights. Build and deploy infrastructure with Terraform. Implement DDL and DML with Iceberg. Do code reviews for …
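To make that day-to-day concrete, here is a minimal, hypothetical PySpark sketch of the kind of step described above: read raw data, transform it with SQL, and write a curated output. The S3 paths, table and column names are invented for illustration; the real stack would swap in Starburst, Iceberg tables or DBT models as appropriate.

    from pyspark.sql import SparkSession

    # Hypothetical daily job: read raw orders, aggregate with SQL, write curated output.
    spark = SparkSession.builder.appName("daily_orders_rollup").getOrCreate()

    orders = spark.read.parquet("s3a://example-bucket/raw/orders/")  # placeholder path
    orders.createOrReplaceTempView("orders")

    daily_totals = spark.sql("""
        SELECT order_date,
               COUNT(*)         AS order_count,
               SUM(order_value) AS total_value
        FROM orders
        GROUP BY order_date
    """)

    daily_totals.write.mode("overwrite").parquet("s3a://example-bucket/curated/daily_orders/")
    spark.stop()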
Experience deploying and maintaining cloud infrastructure (e.g. AWS, GCP, or Azure). Familiarity with data modeling and warehousing concepts, and dimensional modeling techniques. dbt knowledge is preferable. Comfortable working with CI/CD tools, version control, and containers (e.g. Git, Jenkins, Docker). Understanding of data governance, security best …
data solutions using API- and microservice-based architecture. Deep understanding of ETL/ELT architecture, streaming, and event-driven processing; familiarity with tools like dbt, Airflow, Kafka, or equivalents. Familiarity with mid-sized firm tech stacks, especially in financial services, including systems such as NetSuite, Salesforce, Addepar. Experience with Atlassian …
of experience in a Data Engineering role. Strong programming skills in Python, Scala, or Java. Solid experience with ETL tools (e.g., Apache Airflow, dbt, Talend). Proficiency with SQL and relational/non-relational databases (e.g., PostgreSQL, BigQuery, Snowflake, MongoDB). Experience working with cloud environments and data services …
platform roles, including at least 3 years in a leadership position. Deep hands-on expertise in modern data architecture, pipelines, and tooling (e.g., Airflow, DBT, Kafka, Spark, Python, SQL). Strong understanding of cloud infrastructure (AWS, GCP, or Azure) and scalable data systems. Familiarity with analytics and ML workflows, including …
working autonomously. It would be a real bonus, but not a requirement, if: You've worked in a start-up environment. You've got DBT experience. You're familiar with MLOps principles and practices and their application in a production setting. Interview Process: You'll have a 20-minute conversation …
R. Working knowledge of message queuing and stream processing. Experience with Apache Spark or similar technologies. Experience with Agile and Scrum methodologies. Familiarity with dbt and Airflow is an advantage. Experience working in a start-up or scale-up environment. Experience working in the fields of financial technology, traditional financial …
solutions, while also driving innovation in analytics engineering practices. Key Responsibilities: Technical Leadership: Design and implement robust, scalable data pipelines using tools such as dbt and Airflow. Stakeholder Collaboration: Work closely with stakeholders to understand business needs and deliver tailored, data-driven solutions. Data Transformation: Convert raw data into clean …
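For context, a minimal sketch of the dbt-plus-Airflow pattern such a responsibility usually implies is below. The DAG id, schedule, target and project path are placeholders, not a real configuration, and the parameter names assume a recent Airflow 2.x release.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Hypothetical daily DAG: run dbt models, then run dbt tests against the warehouse.
    with DAG(
        dag_id="dbt_daily_transform",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="cd /opt/dbt/example_project && dbt run --target prod",
        )
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="cd /opt/dbt/example_project && dbt test --target prod",
        )

        dbt_run >> dbt_test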
Redshift, BigQuery, or Snowflake, and modern transformation/query engines like Spark, Flink, Trino. Familiarity with workflow management tools (e.g., Airflow) and/or dbt for transformations. Comprehensive understanding of modern data platforms, including data governance and observability. Experience with cloud platforms (AWS, GCP, Azure). Self-starter capable of …
languages commonly used for data work (e.g., Python, Java, Scala) Deep understanding of ETL/ELT tools and workflow orchestration platforms (e.g., Airflow, Fivetran, dbt) Proficiency with SQL and solid grounding in data modeling concepts Familiarity with cloud services and architectures (AWS, GCP, or Azure) Proven experience managing or mentoring …
driving innovation in analytics engineering practices. What You'll Own: Technical Leadership: Lead the design and implementation of robust data pipelines using tools like dbt and Airflow. Client Collaboration: Build strong relationships with stakeholders to identify challenges and deliver tailored solutions. Data Excellence: Transform raw data into actionable datasets for …
preferred). Experience with CI/CD pipelines and version control. Proficiency in data visualisation tools (e.g. Tableau, Power BI). Exposure to tools like DBT, Apache Airflow, Docker. Experience working with large-scale datasets (terabyte-level or higher). Excellent problem-solving capabilities. Strong communication and collaboration skills. Proficiency in …
either AWS or GCP. Experience with Terraform to define and manage cloud infrastructure through code. Desirables: Experience in SQL-based transformation workflows, particularly using DBT in BigQuery. Experience with containerisation technologies (Docker, Kubernetes). Familiarity with streaming data ingestion technologies (Kafka, Debezium). Exposure to data management and Linux administration …
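As a rough illustration of the "SQL-based transformation workflows in BigQuery" mentioned above, here is a minimal Python sketch using the google-cloud-bigquery client; in practice dbt would manage this SQL as a version-controlled model, and the project, dataset and table names are invented.

    from google.cloud import bigquery

    client = bigquery.Client(project="example-project")  # hypothetical project id

    transform_sql = """
    CREATE OR REPLACE TABLE analytics.daily_signups AS
    SELECT DATE(created_at) AS signup_date,
           COUNT(*)         AS signups
    FROM raw.users
    GROUP BY signup_date
    """

    # Submit the transformation and block until the job completes.
    client.query(transform_sql).result()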
and technologies (note we do not expect applicants to have prior experience of all of them): Google Cloud Platform for all of our analytics infrastructure; dbt and BigQuery SQL for our data modelling and warehousing; Python for data science; Go to write our application code; AWS for most of our backend …
data best practices across teams Champion data quality, governance, and documentation Key Requirements: Strong experience with Python, SQL, and modern ETL tools (e.g., Airflow, dbt) Solid grasp of cloud platforms (AWS/GCP/Azure) and data warehouses (e.g., BigQuery, Snowflake) Familiarity with streaming technologies (Kafka, Kinesis, etc.) Passion for …
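To illustrate the streaming familiarity listed above, a minimal sketch using the kafka-python client follows; the topic name, broker address and consumer group are placeholders.

    import json

    from kafka import KafkaConsumer

    # Hypothetical consumer: read JSON events from a topic and print them.
    consumer = KafkaConsumer(
        "user-events",
        bootstrap_servers="localhost:9092",
        group_id="analytics-demo",
        auto_offset_reset="earliest",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    for message in consumer:
        # message.value is already a parsed dict thanks to the deserializer above.
        print(message.topic, message.offset, message.value)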
with IaC (Terraform) and CI/CD (Jenkins). Experience in developing and deploying AI/ML pipelines. Familiarity with Airflow, Redshift, and dbt. What you will do: Architect, build, and optimise robust ETL processes for efficient data extraction, transformation, and loading. Develop an automation model and sophisticated …
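One hedged sketch of the kind of load step implied by "Airflow, Redshift, and dbt": bulk-loading staged S3 files into Redshift with a COPY statement executed through psycopg2. The cluster endpoint, credentials, table, bucket and IAM role are all placeholders; a real pipeline would read them from configuration or a secrets manager rather than hard-coding them.

    import psycopg2

    COPY_SQL = """
    COPY analytics.orders_staging
    FROM 's3://example-bucket/staging/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
    FORMAT AS PARQUET;
    """

    conn = psycopg2.connect(
        host="example-cluster.abc123.eu-west-2.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="change-me",
    )
    try:
        with conn, conn.cursor() as cur:
            cur.execute(COPY_SQL)  # one bulk load of the staged files
    finally:
        conn.close()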
Preferred Knowledge/Experience: Experience with CI/CD and Infrastructure-as-Code (e.g., Terraform, CloudFormation). Familiarity with modern data stack components (e.g., dbt, Snowflake, Airbyte). Experience working in an Agile/Scrum environment. Knowledge of Python or Java/Scala for data engineering. Experience with version control …
Stack: Python and Scala; Starburst and Athena; Kafka and Kinesis; DataHub; MLflow and Airflow; Docker and Terraform; Kafka, Spark, Kafka Streams and KSQL; DBT; AWS, S3, Iceberg, Parquet, Glue and EMR for our Data Lake; Elasticsearch and DynamoDB. More information: Enjoy fantastic perks like private healthcare & dental insurance, a …
Nottinghamshire, United Kingdom Hybrid / WFH Options
Akkodis
Data Engineer (Python, Data Modelling, ETL/ELT, Apache Airflow, DBT, AWS) Enterprise-scale tech firm Up to £70,000 plus benefits - FULLY REMOTE UK Are you a Data Engineering enthusiast who thrives on designing and implementing robust ETL processes, highly scalable data structures and data pipelines within a truly …
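For a flavour of the kind of ETL work this advert describes, a small, hypothetical extract-transform-load step in Python follows; the bucket, keys and column names are invented, and a production pipeline would be orchestrated by Airflow rather than run as a one-off script.

    import io

    import boto3
    import pandas as pd

    s3 = boto3.client("s3")

    # Extract: download the raw CSV from a hypothetical bucket.
    raw_obj = s3.get_object(Bucket="example-bucket", Key="raw/customers.csv")
    frame = pd.read_csv(io.BytesIO(raw_obj["Body"].read()))

    # Transform: normalise email addresses and drop duplicates.
    frame["email"] = frame["email"].str.strip().str.lower()
    frame = frame.drop_duplicates(subset=["email"])

    # Load: stage the cleaned data back to S3 as Parquet.
    buffer = io.BytesIO()
    frame.to_parquet(buffer, index=False)
    s3.put_object(Bucket="example-bucket", Key="curated/customers.parquet", Body=buffer.getvalue())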
Have hands-on experience with cloud infrastructure (GCP/AWS/Azure), infrastructure-as-code (Terraform), containerisation (Docker/k8s) and data pipelines (SQL, dbt, Airbyte) Love automation, process improvement and finding ways to help others work efficiently Are comfortable working autonomously and taking responsibility for the delivery of large …
Snowflake/Databricks/Redshift/BigQuery, including performance tuning and optimisation. Understanding of best practices for designing scalable and efficient data models, leveraging dbt for transformations. Familiarity with CircleCI, Terraform or similar tools for deployment and infrastructure as code. In this role, you will be responsible for: Shipping and …