ability (Vue, React or Angular good but not necessary); Agile. The following is DESIRABLE, not essential: AWS or GCP; buy-side experience; data tools such as Glue, Athena, Airflow, Ignite, DBT, Arrow, Iceberg, Dremio; Fixed Income performance, risk or attribution; TypeScript and Node. Role: Python Developer (Software Engineer Programmer Developer Python Fixed Income JavaScript Node Fixed Income Credit Rates Bonds ABS …) to be in the office 1-2 times a week. The tech environment is very new and will soon likely include exposure to the following: Glue, Athena, Airflow, Ignite, DBT, Arrow, Iceberg, Dremio. This environment has been described as a corporate environment with a genuine start-up/fintech attitude towards technology. Hours are 9-5.
and business value, while also promoting engineering best practices. Key responsibilities will include: - Designing and implementing scalable data warehouse solutions using Snowflake - Building efficient ELT/ETL pipelines using DBT and other modern tooling - Writing and optimising complex SQL queries for large datasets - Applying software engineering principles to data systems, including version control, testing, monitoring and CI/CD - Working … optimal solutions - Strong SQL skills with experience across relational and non-relational databases - Proficiency in a modern programming language such as Python, Java or Scala - Hands-on experience using DBT for pipeline development and transformation - Familiarity with cloud platforms such as AWS, Azure or GCP - Knowledge of orchestration tooling (e.g., Airflow, Dagster, Azure Data Factory, Fivetran) Desirable: - Experience deploying AI …
Senior Data Engineer - AWS Leeds/Hybrid, c. 2x per week Salary - Competitive/Negotiable The Role Joining one of the best tech consultancies in the North. The Senior Data Engineer role focuses on the production of scalable and robust …
Intuita - Liverpool, Lancashire, United Kingdom (Hybrid / WFH Options)
data engineering capabilities. Looking at our current pipeline of work, we can also consider those with an Analytics Engineering lean; experience with BigQuery (GCP), data modelling in DBT, and mobile/telecoms industry experience would be beneficial. A bit about YOU! As much as we just love working with great, fun people, there are some obvious required Skills … Architecture, and working with data formats such as JSON, CSV, and Parquet. • Strong understanding of IT concepts, including security, IAM, Key Vault, and networking. • Exposure to Apache Airflow and DBT is a bonus. • Familiarity with agile principles and practices. • Experience with Azure DevOps pipelines. The "Nice to Haves": • Certification in Azure or related technologies. • Experience with other cloud platforms (e.g. …
and scientists working across Africa, Europe, the UK and the US. ABOUT THE ROLE Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, DBT, Docker, Python, SQL, and PySpark to name a few. We work across a variety of data architectures such as Data Mesh, lakehouse, data vault and data warehouses. Our data engineers …
About The Role Hippo is recruiting for a Principal Data Engineer to join our Hippo Herd. Principal Data Engineers work in multi-disciplinary teams that build, support & maintain User-Centred digital solutions that offer real value and work for everyone. …
experience building production data pipelines Advanced Python skills (NumPy, Pandas, SQLAlchemy) and expert-level SQL across multiple database platforms Hands-on experience with modern data stack tools including dbt, Airflow, and cloud data warehouses (Snowflake, BigQuery, Redshift) Strong understanding of data modelling, schema design, and building maintainable ELT/ETL pipelines Experience with cloud platforms (AWS, Azure, GCP) and …
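Aside: several of the listings here pair Airflow with dbt for orchestration. As a hedged illustration of that pattern only (not taken from any listing — every name, path, and schedule below is hypothetical), a minimal Airflow 2.x DAG might land an extract with Pandas and then shell out to dbt:

```python
# Illustrative sketch of the Airflow + dbt pattern named in these listings.
# Airflow 2.4+ TaskFlow API; all names and paths are hypothetical.
import pendulum
from airflow.decorators import dag, task
from airflow.operators.bash import BashOperator


@dag(schedule="@daily", start_date=pendulum.datetime(2024, 1, 1), catchup=False)
def elt_pipeline():
    @task
    def extract_trades() -> str:
        import pandas as pd

        # Stand-in extract step; a real job would pull from an API or source DB.
        df = pd.DataFrame({"trade_id": [1, 2], "notional": [1_000_000, 250_000]})
        path = "/tmp/trades.parquet"  # hypothetical landing location
        df.to_parquet(path)
        return path

    # Delegate transformations to dbt once the extract has landed.
    dbt_build = BashOperator(
        task_id="dbt_build",
        bash_command="cd /opt/dbt_project && dbt build",  # hypothetical project path
    )

    extract_trades() >> dbt_build


elt_pipeline()
```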
Python Data Engineer AI FinTech (DBT, SQL) - London/Hybrid We're hiring on behalf of a rapidly growing AI FinTech building next-generation data-driven solutions. This is a brilliant opportunity for a hands-on Python Data Engineer with 2+ years' experience to join a modern, low-firefighting environment focused on clever, high-quality development - not legacy … and transformations. Your work will influence how real-time financial data is captured, transformed, and utilised. Experience: Python (2+ years' experience): Strong scripting and implementation skills Confident with Pandas DBT: Regular use of DBT in current role (not just setup experience) Creating reusable DBT modules SQL: Comfortable writing, optimizing, and deploying SQL through DBT models Nice-to-Haves: Snowflake experience … side projects, contributions, or personal innovations) Why Join? Greenfield Development: No tangled legacy mess - help shape a clean, smart system from the ground up Modern Stack: Snowflake, DBT, Python, Airflow - no tech debt weighing you down Low-Firefighting Environment: Spend your time building, not fixing Quality Over Volume: Small but impactful data - live, intelligent, and real-time.
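Aside: for readers unfamiliar with the "reusable DBT modules" mentioned above — dbt transformations are most commonly SQL/Jinja models, but since dbt 1.3 they can also be written in Python on warehouses such as Snowflake. A minimal, hedged sketch (this runs inside dbt, not standalone; the model and column names are invented for illustration):

```python
# Sketch of a dbt Python model (dbt >= 1.3 on Snowflake).
# Hypothetical names throughout; column casing may differ per warehouse.
import pandas as pd


def model(dbt, session):
    # Materialize the result of this model as a table in the warehouse.
    dbt.config(materialized="table")

    # dbt.ref() resolves an upstream model (here a hypothetical staging model)
    # and, on Snowflake, returns a Snowpark DataFrame.
    trades = dbt.ref("stg_trades").to_pandas()

    # A small, reusable transformation: daily notional per instrument.
    daily = trades.groupby(
        ["trade_date", "instrument"], as_index=False
    )["notional"].sum()

    # dbt writes the returned DataFrame back to the warehouse.
    return daily
```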
The Role We are looking for a Senior Data Engineer to join our growing data team and play a key role in building scalable, modern data products using Snowflake, dbt Cloud, and the Data Vault framework. This is an exciting opportunity for someone with a strong foundation in cloud-based data engineering to design and develop end-to-end data pipelines … a strategic asset, we'd love to hear from you. What you will be doing Designing, building, and maintaining robust, scalable data pipelines and data products on Snowflake using dbt Cloud Applying Data Vault 2.0 principles and patterns to build a flexible, extensible data model that supports business agility and traceability Collaborating closely with data analysts, data architects, and fellow engineers … and delivery goals Proactively identifying, diagnosing, and resolving performance and data quality issues across the data stack Enforcing best practices for data transformation, documentation, testing, and version control using dbt Supporting the evolution of our modern data stack and mentoring junior team members as needed Implementing CI/CD best practices for data transformation using dbt Cloud and Git-based …
Engineer (Enterprise Data Warehouse Developer) Description: As a Data Engineer, you'll design and maintain data scrapers and data pipelines, design & optimize analytics & relational databases, and build analytics models using DBT and bespoke aggregation engines. You'll work closely with business stakeholders, other BI Developers and DataOps as well as System engineers to support both data and application integrations using bespoke … processes Strong experience in data ingestion, transformation & orchestration technology (ETL tools such as Informatica, DataStage, SSIS, etc.) or open-source tools such as Meltano, Airbyte, and Airflow Proven experience with DBT (data build tool) Proficiency with business intelligence tools (Power BI, Tableau, SAP BI, or similar). Integration & Programming Hands-on experience with API development and integration (REST/SOAP) Proficiency in at …
understanding of data engineering principles and best practices, including data modeling, observable ETL/ELT processes, data warehousing, and data governance. - Proficiency in data manipulation languages (e.g., SQL/DBT) and programming languages relevant to data engineering (e.g., Python). - Experience with a variety of data processing frameworks and technologies, including cloud-based data services. Software Engineering Practices: - Experience with … Tech Stack Cloud Data Warehouse - Snowflake AWS Data Solutions - Kinesis, SNS, SQS, S3, ECS, Lambda Data Governance & Quality - Collate & Monte Carlo Infrastructure as Code - Terraform Data Integration & Transformation - Python, DBT, Fivetran, Airflow CI/CD - GitHub Actions/Jenkins Nice to Have Experience Understanding of various data architecture paradigms (e.g., Data Lakehouse, Data Warehouse, Data Mesh) and their applicability to …
Tech Stack: Cloud Data Warehouse - Snowflake AWS Data Solutions - Kinesis, SNS, SQS, S3, ECS, Lambda Data Governance & Quality - Collate & Monte Carlo Infrastructure as Code - Terraform Data Integration & Transformation - Python, DBT, Fivetran, Airflow CI/CD - GitHub Actions/Jenkins Business Intelligence - Looker Skills & Attributes We'd Like To See: Extensive experience in data engineering, including designing and maintaining robust data …
data models. Experience with Interface/API data modelling. Experience with CI/CD: GitHub Actions (or similar) Knowledge of Snowflake/SQL Knowledge of Apache Airflow Knowledge of DBT Familiarity with Atlan for data catalog and metadata management Understanding of Iceberg tables Who we are: We're a business with a global reach that empowers local teams, and we …
The team you'll be working with: We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in the full data modeling lifecycle, including designing, implementing, and …
insight. Your role is to improve their interaction with these tools, whether they are internally or externally developed. Some examples of this type of work: Improving our in-house dbt CLI wrapper to make it more user-friendly and optimise runtimes Monitoring tooling interaction with tools like Sentry or Datadog to identify areas for improvement Developing our internal BI tooling … experience; we have plenty of incredible developers at Octopus Energy who are willing to teach if you're willing to learn! Required experience: Python Git Nice to have: SQL dbt GitHub CircleCI Airflow Kubernetes Terraform A cloud warehouse provider e.g. Databricks, GCP, Snowflake AWS We aren't necessarily looking for someone who is "10-out-of-10" in all these …
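Aside: the "in-house dbt CLI wrapper" above is internal to Octopus Energy, so nothing is known about its design. As a rough, hedged sketch of the general pattern only, such a wrapper typically applies team defaults and then delegates to the real dbt executable:

```python
# Illustrative sketch of a thin dbt CLI wrapper -- not Octopus Energy's tool.
# Applies a hypothetical default flag, then hands off to the real dbt binary.
import os
import subprocess
import sys

# Hypothetical team-wide default; dbt's --profiles-dir flag is real.
DEFAULT_ARGS = ["--profiles-dir", os.path.expanduser("~/.dbt")]


def main() -> int:
    # Pass through the user's subcommand (e.g. "run", "test") plus defaults.
    cmd = ["dbt", *sys.argv[1:], *DEFAULT_ARGS]
    print(f"running: {' '.join(cmd)}")
    return subprocess.call(cmd)


if __name__ == "__main__":
    sys.exit(main())
```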
reconciliation, and integration verification activities. Core skills and experience: Proven experience designing scalable data architectures in cloud and hybrid environments. Expertise in data modelling, SQL, and platforms like Snowflake, dbt, Power BI, and Databricks. Fluency in Python and knowledge of multiple cloud providers (AWS, Azure, GCP). Understanding of security principles including role-based access control. Experience with legacy-to …
top of our list: Google Cloud Storage Google Data Transfer Service Google Dataflow (Apache Beam) Google Pub/Sub Google Cloud Run BigQuery or any RDBMS Python Debezium/Kafka dbt (Data Build Tool) Interview process: Interviewing is a two-way process and we want you to have the time and opportunity to get to know us, as much as we are getting …
data platform evolution Has experience (or strong interest) in building real-time or event-driven architectures Modern Data Stack includes: Python, SQL Snowflake, Postgres AWS (S3, ECS, Terraform) Airflow, dbt, Docker Apache Spark, Iceberg What they're looking for: Solid experience as a Senior/Lead/Principal Data Engineer, ideally with some line management or mentoring Proven ability to …
… Proficiency with Docker, Linux, and bash. Ability to document code, architectures, and experiments. Preferred Qualifications Experience with databases and data warehousing (Hive, Iceberg). Data transformation skills (SQL, DBT). Experience with orchestration platforms (Airflow, Argo). Knowledge of data catalogs, metadata management, vector databases, relational/object databases. Experience with Kubernetes. Understanding of computational geometry (meshes, boundary representations …
with containerization and CI/CD tools (e.g., Docker, GitHub Actions). Knowledge of networking and cloud infrastructure (e.g., AWS, Azure). Experience with modern data processing frameworks (e.g., dbt, Apache Airflow, Spark, or similar). Requirements: A strong focus on system observability and data quality. Emphasis on rapid scalability of solutions (consider market ramp-up when entering a new …