• Experience with shell-scripting languages
• Working knowledge of orchestration tools, e.g. Apache Airflow
• Experience of ETL/ELT tooling, for example Pentaho, AWS Glue, DBT, Airflow etc.
• Git and experience in building CI/CD pipelines
• DBA experience in an AWS cloud environment managing AWS Aurora and Amazon RDS (MySQL, Postgres …
Birmingham, England, United Kingdom Hybrid / WFH Options
Lorien
Delta Lake, and other enterprise-scale data stores.
• Data Orchestration - enterprise-scale usage of technology such as Azure Data Factory, Apache Airflow, Logic Apps, DBT, SnapLogic, Spark or similar tools.
• Software Tooling - Git/GitHub, CI/CD, deployment tools like Octopus, Terraform infrastructure as code and other DevOps practices.
Stratford-Upon-Avon, England, United Kingdom Hybrid / WFH Options
Impellam Group
Delta Lake, and other enterprise-scale data stores.
• Data Orchestration - enterprise-scale usage of technology such as Azure Data Factory, Apache Airflow, Logic Apps, DBT, SnapLogic, Spark or similar tools.
• Software Tooling - Git/GitHub, CI/CD, deployment tools like Octopus, Terraform infrastructure as code and other DevOps practices.
• Financial services experience
• Experience designing and implementing data warehouses
• Experience developing and maintaining Apache Airflow DAGs to implement data pipelines
• Hands-on experience using DBT to manage and implement data transformations
• A working understanding of Docker
• Experience working with big data and/or MPP (massively parallel processing) databases
• Experience …
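Several of these listings ask for Apache Airflow DAG experience. An Airflow DAG is just a set of tasks plus a dependency graph that the scheduler executes in topological order. Airflow itself is not part of the standard library, so as an illustration only, the core idea can be sketched with Python's built-in `graphlib`; the task names and dependencies below are hypothetical, not from any real pipeline.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline tasks. In Airflow each of these would be an
# operator; dependencies would be wired with the >> operator instead
# of the explicit dict below.
def extract_orders(): return "orders extracted"
def extract_users():  return "users extracted"
def transform():      return "sources joined"
def load():           return "warehouse loaded"

tasks = {
    "extract_orders": extract_orders,
    "extract_users": extract_users,
    "transform": transform,
    "load": load,
}

# Dependency map: each key runs only after every task in its value set.
deps = {
    "transform": {"extract_orders", "extract_users"},
    "load": {"transform"},
}

def run_pipeline():
    """Execute all tasks in a dependency-respecting (topological) order."""
    order = list(TopologicalSorter(deps).static_order())
    for name in order:
        tasks[name]()
    return order

order = run_pipeline()
```

This captures what "developing and maintaining DAGs" means at its core: declaring tasks and edges, and letting the orchestrator derive a valid execution order.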
e.g., Python, Java, Scala).
• Hands-on experience with ETL (extract, transform, load) processes and data integration tools and techniques such as Snowflake, Fivetran, dbt, etc.
• Familiarity with cloud-based data platforms and services (e.g., AWS, Azure, Snowflake).
• Proven ability to design and implement scalable and efficient data pipelines.
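The "extract, transform, load" pattern named above can be sketched end-to-end with only the standard library. This is a minimal illustration, not a production pipeline: the CSV source is inlined, SQLite stands in for a warehouse like Snowflake or Redshift, and the table and column names are hypothetical.

```python
import csv
import io
import sqlite3

# Extract: read raw rows from a CSV source (inlined here for the sketch;
# in practice this would come from an API, a file drop, or a Fivetran-style sync).
RAW = "user_id,amount\n1,10.50\n2,3.25\n1,4.00\n"
rows = list(csv.DictReader(io.StringIO(RAW)))

# Transform: cast types and aggregate total spend per user.
totals = {}
for r in rows:
    uid = int(r["user_id"])
    totals[uid] = totals.get(uid, 0.0) + float(r["amount"])

# Load: write the transformed result into a warehouse table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user_spend (user_id INTEGER PRIMARY KEY, total REAL)")
con.executemany("INSERT INTO user_spend VALUES (?, ?)", sorted(totals.items()))
con.commit()

result = dict(con.execute("SELECT user_id, total FROM user_spend"))
```

Tools like Fivetran and dbt split this same pattern apart: Fivetran handles extract/load into the warehouse, and dbt handles the transform step in SQL afterwards (ELT rather than ETL).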
tools such as FiveTran or Stitch
• Experience using Business Intelligence tools such as PowerBI, Looker or Tableau
• Experience with data pipeline tools such as DBT, Airflow or Luigi is a plus!
• Experience using cloud environments, e.g. Azure or AWS
• Understanding of the Agile delivery method
Working Conditions:
• Permanent, London Chiswick …
Manchester Area, United Kingdom Hybrid / WFH Options
hackajob
Cloud products (Cloud SQL, BigQuery, RedShift, Snowflake, Apache Beam, Spark) or similar technologies. Proficiency in open-source data-stack tools such as Airflow, Airbyte, DBT, Kafka, etc. Familiarity with data visualization tools like PowerBI, Tableau, or Looker. Knowledge of Teradata, Mainframe, and/or Google Analytics is beneficial. Understanding of …
Python/JavaScript/C#
• Familiarity with statistical/machine learning/AI concepts and techniques
• Understanding of data pipeline/orchestration tools, e.g. dbt, Dataform
• Appreciation of GCP's serverless technologies, e.g. Cloud Run/Workflows
• Understanding of Google's marketing stack: Google Analytics, Google Tag Manager, Google Ads …
Manchester, North West, United Kingdom Hybrid / WFH Options
N Brown Group
Cloud products (Cloud SQL, BigQuery, RedShift, Snowflake, Apache Beam, Spark) or similar products. Experience with open-source data-stack tools such as Airflow, Airbyte, DBT, Kafka etc. Awareness of data visualisation tools such as PowerBI, Tableau and/or Looker. Knowledge of Teradata, Mainframe and/or Google Analytics is …
working with relational databases, and programming experience in Python or Scala. Experience with Snowflake and its tooling (Snowpark, Snowpipe, etc.). Familiarity with Fivetran, DBT, TensorFlow, PyTorch, and other modern data stack components. Knowledge of data integration and ETL frameworks and tools. Understanding of DataOps, data mining, and data visualization …
at scale.
Technical Skills:
• Experience: 7+ years in data engineering with a focus on implementing large-scale lakehouses (Databricks, Snowflake, Synapse). Proficiency in DBT and PowerBI is advantageous.
• Cloud and Data Services Mastery: comprehensive knowledge of Azure data services (Databricks, Synapse, ADF) and associated infrastructure (firewalls, storage, key vault …
Python, and orchestration tools such as Autosys and Airflow. Familiarity with Amazon Redshift and PostgreSQL. Ability to troubleshoot and optimize data processes. Experience with DBT, Power BI and Tableau is a plus.
LookML
• Experience working with numerous AWS services
• A detailed understanding of CI/CD practices & tooling
• A research or mathematical background
• Experience working with dbt & dbt Cloud
• Experience working with data orchestration tooling (e.g. Dagster, Prefect, etc.)
• Experience working with data ingestion tooling (e.g. FiveTran, Keboola, etc.)
• Experience working with …
years of demonstrated commercial experience as a Data Engineer or similar role within large-scale environments dealing with large data sets.
• Expertise in SQL & dbt, and ideally Kafka
• Significant Python coding skills
• Containerisation experience (Docker, Kubernetes)
• Cloud computing experience (GCP/AWS/Azure)
• Strong preference for a Snowflake background …
and opportunities for leveraging the organisation's data assets to create impact in a fast-moving environment.
Our Tech Stack: AWS, Python, Dagster, Airbyte, dbt, Redshift, Terraform, GitHub, Postgres, Tableau
Skills and Experience: We encourage you to remove education from your CV upon application, as qualifications are not a driving …
Sunderland, England, United Kingdom Hybrid / WFH Options
Client Server
with Python (or C#) and SQL
• You have experience with SQL databases (e.g. Amazon Redshift, PostgreSQL)
• You have experience with data tooling (e.g. Airflow, DBT, AWS Kinesis)
• You have strong analysis and problem-solving skills
• You're collaborative with excellent communication skills
What's in it for you:
• Competitive salary …
pipelines and basic API implementation (Flask or FastAPI). Understanding of observability pipelines (e.g., Prometheus + Grafana, ELK stack). Proficiency in SQL and experience with DBT or equivalent frameworks.
Desirable: Knowledge of scalable data processing systems like Spark. Familiarity with Machine Learning in Python. Strong skills in Continuous Integration/Continuous …
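"Basic API implementation" here means exposing an endpoint that returns structured data. Flask and FastAPI are third-party packages, so as a standard-library-only sketch of the same idea, the WSGI callable below (the interface both frameworks ultimately wrap) serves one hypothetical JSON health-check route and can be exercised directly, without running a server.

```python
import json

def app(environ, start_response):
    """A minimal WSGI application with one JSON endpoint.
    Flask/FastAPI layer routing, validation, and async support on top
    of this same request-in, response-out shape."""
    if environ.get("PATH_INFO") == "/health":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

# Exercise the app the way a test client would: call it with a fake
# environ and capture the status it reports.
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status

response = b"".join(app({"PATH_INFO": "/health"}, fake_start_response))
```

In Flask the equivalent would be a `@app.route("/health")` function returning a dict; the point of the sketch is that an API endpoint is just a function from request data to a status line plus a body.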
Cardiff, Wales, United Kingdom Hybrid / WFH Options
Legal & General
positions are to focus on the retirements side of the Retail division and will build out new data pipelines utilising tools such as Synapse, DBT, Azure DevOps and Snowflake. This role will see you responsible for designing, building, and implementing a variety of data solutions using modern ETL techniques and …
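dbt, which recurs throughout these listings, treats a "model" as a SELECT statement that `dbt run` materialises as a table or view in the warehouse. As an illustration only, the sketch below materialises one hypothetical model over a source table, with SQLite standing in for Snowflake; the table, column, and model names are invented for this example.

```python
import sqlite3

# A dbt model is a SELECT statement (normally in its own .sql file with
# Jinja templating); this hypothetical model counts policies by type.
MODEL_SQL = """
SELECT policy_type, COUNT(*) AS n_policies
FROM raw_policies
GROUP BY policy_type
"""

# Source data: a raw table as a Fivetran/Airbyte-style sync might land it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw_policies (id INTEGER, policy_type TEXT)")
con.executemany(
    "INSERT INTO raw_policies VALUES (?, ?)",
    [(1, "annuity"), (2, "annuity"), (3, "drawdown")],
)

# Materialise the model as a table, the way `dbt run` does for a model
# configured with materialized='table'.
con.execute(f"CREATE TABLE policy_counts AS {MODEL_SQL}")
counts = dict(con.execute("SELECT policy_type, n_policies FROM policy_counts"))
```

What dbt adds on top of this core move is dependency ordering between models, testing, and documentation, but the unit of work stays a plain SELECT.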