Data Engineer

🚀 Are you a Data Engineer (5–7+ years) who enjoys owning production-grade pipelines end-to-end, optimising performance, and working with modern Python tooling on time-series datasets?

I’m supporting a London-based fintech in their search for a hands-on Data Engineer to help build and improve the data infrastructure powering a unified data + analytics API for financial market participants.

You’ll sit in a small engineering/analytics team and take ownership of pipelines end-to-end — from onboarding new datasets through to reliability, monitoring and data quality in production. Finance experience is a bonus, but not essential.

Note: they use cloud infrastructure but deploy services on their own servers, so a strong production/ops mindset is important.

In this role, you’ll:

  • Build, streamline and improve ETL/data pipelines (prototype → production)
  • Ingest and normalise high-velocity, time-series datasets from multiple external sources
  • Work heavily in Python with a modern columnar stack (Polars + Parquet/Arrow/PyArrow; DuckDB is a nice-to-have)
  • Orchestrate workflows and improve reliability (they use Temporal — similar orchestration experience is fine)
  • Own production readiness: validations, automated checks, backfills/reruns, monitoring/alerting, incident/RCA mindset
  • Work independently and help drive delivery forward — including providing practical technical guidance to shape solutions

What’s in it for you?

  • Modern Python stack – Polars + Parquet/Arrow (DuckDB a plus)
  • Ownership & impact – high visibility; you’ll influence performance and reliability directly
  • Market/time-series exposure – complex financial datasets; learn the domain as you go
  • Hybrid working – London preferred, 2–3 days in the office
  • Start ASAP – interviewing now

What my client is looking for:

  • 5–7+ years hands-on data engineering experience
  • Strong Python + SQL fundamentals (ETL, pipelines, data modelling, performance)
  • Hands-on experience with Polars and Parquet/Arrow/PyArrow
  • Proven ability to operate pipelines in production (monitoring, backfills, data quality, incidents)
  • Able to work independently and drive things forward without heavy oversight
  • Interest in financial data (experience helpful but not required)

Nice to have:

  • DuckDB experience
  • Time-series experience (market data, telemetry, pricing, events)
  • Streaming exposure (Kafka/Event Hubs/Kinesis)
  • Experience with Temporal (or similar orchestrators like Airflow/Dagster/Prefect)
  • Any exposure to AI agents / automation tooling

👉 Apply now!

Job Details

Company
Intellect Group
Location
City of London, London, United Kingdom