Skills Required: Proficiency in SQL for querying, transforming, and managing data within databases. Experience in developing and optimising ETL/ELT pipelines and using dbt for data transformation and modelling. Knowledge of data modelling techniques, including star and snowflake schemas, for efficient data analysis. Familiarity with cloud platforms such as …
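As a rough illustration of the star-schema modelling these listings call for (all table and column names below are hypothetical), analytical queries join a central fact table to its surrounding dimension tables and aggregate the measures:

    -- Hypothetical star schema: a fact table of order measures joined to dimensions.
    SELECT
        d.calendar_month,
        p.product_category,
        SUM(f.revenue) AS total_revenue,
        COUNT(*)       AS order_count
    FROM fact_orders f                                      -- central fact table
    JOIN dim_date    d ON f.date_key    = d.date_key        -- dimension: when
    JOIN dim_product p ON f.product_key = p.product_key     -- dimension: what
    GROUP BY d.calendar_month, p.product_category;

A snowflake schema follows the same idea but normalises the dimensions further, e.g. splitting product_category out of dim_product into its own table.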
… Experience deploying and maintaining cloud infrastructure (e.g. AWS, GCP, or Azure). Familiarity with data modeling and warehousing concepts, and dimensional modeling techniques. dbt knowledge is preferable. Comfortable working with CI/CD tools, version control, and containers (e.g. Git, Jenkins, Docker). Understanding of data governance, security best practices …
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Duel
… with Snowflake. You understand event-driven architectures and real-time data processing. You have experience implementing and maintaining scalable data pipelines using tools like dbt, Apache Airflow, or similar. You have no issue working with either structured or semi-structured data. You are comfortable working with data engineering and scripting …
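To illustrate the semi-structured half of that requirement, a minimal Snowflake-flavoured sketch (the raw_events table and its JSON shape are hypothetical) that unnests a VARIANT column with LATERAL FLATTEN:

    -- Hypothetical table "raw_events" with a VARIANT column "payload" holding JSON
    -- such as {"user_id": "u1", "items": [{"sku": "a", "qty": 2}]}.
    SELECT
        e.payload:user_id::string AS user_id,          -- cast a JSON field to a SQL type
        i.value:sku::string       AS sku,
        i.value:qty::int          AS qty
    FROM raw_events e,
         LATERAL FLATTEN(input => e.payload:items) i;  -- one row per array element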
Stack: Python and Scala; Starburst and Athena; Kafka and Kinesis; DataHub; MLflow and Airflow; Docker and Terraform; Kafka, Spark, Kafka Streams and KSQL; dbt; AWS, with S3, Iceberg, Parquet, Glue and EMR for our Data Lake; Elasticsearch and DynamoDB. More information: Enjoy fantastic perks like private healthcare & dental insurance, a …
Herndon, Virginia, United States Hybrid / WFH Options
Maxar Technologies Holdings Inc
… data. Experience building data-streaming processes. Experience using PostGIS. Experience with any of the following: Apache Hive, Trino, Presto, Starburst, OpenMetadata, Apache Superset, Terraform, dbt, Tableau, Fivetran, Airflow. Experience implementing resilient, scalable, and supportable systems in AWS. Experience using a wide variety of open-source technologies and cloud services. Experience …
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Yelp USA
… technologies: Python, AWS Redshift, AWS Athena/Apache Presto, Big Data technologies (e.g. S3, Hadoop, Hive, Spark, Flink, Kafka, etc.), NoSQL systems like Cassandra; dbt is nice to have. What you'll get: Full responsibility for projects from day one, a collaborative team, and a dynamic work environment. Competitive salary …
… R. Working knowledge of message queuing and stream processing. Experience with Apache Spark or similar technologies. Experience with Agile and Scrum methodologies. Familiarity with dbt and Airflow is an advantage. Experience working in a start-up or scale-up environment. Experience working in the fields of financial technology, traditional financial services …
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
… as well as 30 engineers in the business's data arm. Requirements: 3+ years' data engineering experience; Snowflake experience; proficiency across an AWS tech stack; dbt expertise; Terraform experience. Nice to have: data modelling; Data Vault; Apache Airflow. Benefits: up to 10% bonus; up to 14% pension contribution; 29 days annual leave …
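Data Vault, listed as a nice-to-have above, models each business entity as a hub (business keys), satellites (history-tracked attributes), and links (relationships). A minimal hypothetical sketch for a customer entity:

    -- Hypothetical Data Vault structures; names and types are illustrative only.
    CREATE TABLE hub_customer (
        customer_hk   CHAR(32)    PRIMARY KEY,   -- hash of the business key
        customer_id   VARCHAR(50) NOT NULL,      -- business key from the source system
        load_date     TIMESTAMP   NOT NULL,
        record_source VARCHAR(50) NOT NULL
    );

    CREATE TABLE sat_customer_details (
        customer_hk CHAR(32)  NOT NULL REFERENCES hub_customer (customer_hk),
        load_date   TIMESTAMP NOT NULL,
        name        VARCHAR(100),
        email       VARCHAR(100),
        PRIMARY KEY (customer_hk, load_date)     -- one row per change, so history is kept
    );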
… (e.g., Snowflake). Proficiency in SQL and at least one scripting language (e.g., Python, Bash). Familiarity with data pipeline orchestration tools (e.g., Airflow, dbt). Strong understanding of data quality principles and best practices. Excellent communication and collaboration skills. Working with AWS, Twilio Segment and Google Analytics. Hands-on …
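A minimal sketch of the kind of data-quality assertion that requirement implies (the orders table is hypothetical; tools like dbt ship equivalent checks as built-in not_null and unique tests):

    -- Hypothetical check: order_id in "orders" must be non-null and unique.
    -- Any rows returned mean the quality rule is violated.
    SELECT order_id, COUNT(*) AS occurrences
    FROM orders
    GROUP BY order_id
    HAVING COUNT(*) > 1 OR order_id IS NULL;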
… to other engineering teams in a Flask API. Our current technology stack primarily includes Linux, Docker, Kubernetes, Jenkins, Kafka, Python, Flask, Postgres, AWS Redshift, dbt, Google BigQuery, Prometheus, Grafana, Elasticsearch, Kibana, Sisense. Your responsibilities: As a member of the data team, your responsibilities will include contributions to: Developing and …
… either AWS or GCP. Experience with Terraform to define and manage cloud infrastructure through code. Desirables: Experience in SQL-based transformation workflows, particularly using dbt in BigQuery. Experience with containerisation technologies (Docker, Kubernetes). Familiarity with streaming data ingestion technologies (Kafka, Debezium). Exposure to data management and Linux administration …
… while collaborating with technical and non-technical stakeholders. What you'll do: Crafting robust Business Intelligence (BI) solutions using cutting-edge tools like Alteryx, dbt, Snowflake, Azure, Tableau, Power BI, and ThoughtSpot. Collaborating with stakeholders to gather business requirements, designing innovative solutions, and creating intuitive reports and dashboards. You'll …
… working autonomously. It would be a real bonus, but not a requirement, if: You've worked in a start-up environment. You've got dbt experience. You have familiarity with MLOps principles and practices and their application in a production setting. Interview process: You'll have a 20-minute conversation …
Nottinghamshire, United Kingdom Hybrid / WFH Options
Akkodis
Data Engineer (Python, Data Modelling, ETL/ELT, Apache Airflow, dbt, AWS) - Enterprise-scale tech firm - Up to £70,000 plus benefits - FULLY REMOTE (UK). Are you a data engineering enthusiast who thrives on designing and implementing robust ETL processes, highly scalable data structures and data pipelines within a truly …
… Snowflake/Databricks/Redshift/BigQuery, including performance tuning and optimisation. Understanding of best practices for designing scalable and efficient data models, leveraging dbt for transformations. Familiarity with CircleCI, Terraform or similar tools for deployment and infrastructure as code. In this role, you will be responsible for: Shipping and …
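For the dbt-led transformation work these roles keep mentioning, a minimal sketch of a dbt model (the model and its upstream staging model are hypothetical); dbt compiles the Jinja ref() call into warehouse-specific SQL and uses it to build the dependency graph:

    -- models/marts/fct_daily_orders.sql (hypothetical dbt model)
    {{ config(materialized='table') }}  -- dbt materialises this model as a table

    SELECT
        order_date,
        COUNT(*)    AS order_count,
        SUM(amount) AS gross_revenue
    FROM {{ ref('stg_orders') }}        -- declares a dependency on a staging model
    GROUP BY order_date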
San Diego, California, United States Hybrid / WFH Options
Avidity Biosciences
… of experience (BA/BS); or a Master's degree with 8+ years of experience with modern data engineering using SQL & Python. Mastery of dbt for modular, scalable, and testable data transformations. Solid expertise in BI and visualization tools (e.g., Looker, Tableau, Mode) and their data modeling layers. Experience in …
… analysis. Knowledge of dashboard design and data visualization best practices. Experience with cloud-based data infrastructure (AWS). Familiarity with modern data stack tools (Airflow, dbt, etc.). Why this role matters: Judge.me is at an inflection point. As the market leader in Shopify reviews, we've chosen to build our future …
… and coaching others to succeed. Have a strong background in building and managing data infrastructure at scale, with expertise in Python, SQL, BigQuery, AWS, dbt and Airflow. Have a strong background in data modelling and building scalable data pipelines. Are naturally curious and enthusiastic about experimenting with new tech to …
Have hands-on experience with cloud infrastructure (GCP/AWS/Azure), infrastructure-as-code (Terraform), containerisation (Docker/k8s) and data pipelines (SQL, dbt, Airbyte). Love automation, process improvement and finding ways to help others work efficiently. Are comfortable working autonomously and taking responsibility for the delivery of large …
… or management experience in a technical role, with responsibility for people and projects. Hands-on expertise in Python and SQL; experience with Spark and dbt is a plus. Familiarity with AWS services such as Lambda, S3, Redshift, and Glue; Databricks experience is advantageous. Experience designing and implementing scalable data platforms …
Experience with at least one BI/visualisation tool (e.g. Looker/Power BI/Tableau, etc.). Knowledge of ETL/ELT tools (e.g. dbt/Fivetran, etc.). An understanding of data governance principles, and the importance of maintaining data quality and providing accurate data and sound insights. An agile …
… Tools). Experience with one or more of the following is a plus: Kubernetes, Prometheus, Argo Workflows, GitHub Actions, Elasticsearch/OpenSearch, PostgreSQL, BigQuery, dbt data pipelines, Fastly, Storybook, Contentful, Deno, Bun. Benefits: We want to give you a great work environment; contribute back to both your personal and professional …
… and configuration. Experience working with Kubernetes (k8s). Knowledge of data and analytical products like Immuta, Apache Ranger, Collibra, Snowflake, PostgreSQL, Redshift, Hive, Iceberg, dbt, AWS Lambda, AWS Glue, and Power BI. Familiarity with cloud environments such as AWS. Knowledge of additional scripting languages or tools is a plus. Employment …
… with cloud data warehouses such as Redshift. Skills in modelling and querying data in SQL and Python. Experience with ETL/ELT tooling including dbt and Airflow. Experience with CI/CD and infrastructure-as-code, ideally within AWS cloud. Also desirable: familiarity with AWS's data tools such as EMR …