Head of Data

About the company

PrismFP Analytics builds quantitative analytics products for institutional investors from offices in London, New York, Copenhagen and Tallinn. We combine deep practitioner knowledge of financial markets with cloud-scale data infrastructure to power proprietary derivatives analytics, portfolio construction, and risk tooling for some of the largest names in finance.

The role

This is a balanced role - roughly 50% people leadership, 50% hands-on engineering. You will own our data strategy and roadmap while staying close enough to implementation to unblock your team, make architecture decisions with conviction, and ship code yourself when it matters.

Team

You will manage a team of 3 data professionals today, with a hire planned this year.

Reporting line

This is a high-profile role that reports directly to the CEO and carries responsibility for hiring, performance reviews, and team development.

What you’ll do

  • Set and evolve the data strategy and roadmap in line with company priorities, balancing quick wins with scalable foundations.
  • Lead the data team (currently 3 people), including hiring (with a plan to add 1 more this year), performance reviews, coaching, and day-to-day delivery management.
  • Stay hands-on: design, build, and maintain data pipelines, datasets, and internal tooling that support quantitative research and product development.
  • Establish and own data quality, availability, and coverage metrics (KPIs), along with monitoring and alerting to keep data reliable.
  • Partner closely with quantitative researchers, software engineers, product owners, and the business to deliver end-to-end data capabilities.
  • Improve data engineering standards and practices (testing, code quality, documentation, and operational excellence) to help the team scale sustainably.

What success looks like (first 6 months)

  • Clear quality/availability/coverage KPIs are defined, visible, and used to guide priorities.
  • A clean taxonomy and stable interfaces/contracts exist for core datasets (definitions, naming, ownership, consumption patterns).
  • Incidents and manual fixes drop through better validation, monitoring/alerts, and clear ownership.
  • Strong data engineering practices are the default: testing, code review, documentation, and operational ownership.
  • A sustainable team cadence is in place: prioritisation, planning, and support expectations are clear.
  • Cost and performance are managed intentionally, with visibility into drivers and explicit trade-offs—without hurting reliability.

Who you are and what you have

  • You have 5–10+ years of experience in data engineering, with at least 2 years leading or managing a team—guiding, motivating, and developing colleagues to deliver shared outcomes.
  • You have experience working with financial datasets, ideally including market data feeds, derivatives reference data, and time-series pricing or similar domains.
  • You can align data initiatives with overarching business strategy, translating business priorities into a clear data roadmap your team can execute.
  • You have deep experience designing, building, and operating production data pipelines, including owning reliability, monitoring, and failure recovery for orchestration systems such as Apache Airflow (or equivalent).
  • You are fluent in modern data and cloud ecosystems, with hands-on experience (or the ability to ramp quickly) in technologies such as Iceberg, Spark, PostgreSQL, and cloud-native infrastructure on AWS (or equivalent).
  • You write clean, readable, and testable Python code, applying sound abstractions, naming, and review discipline to build systems that scale beyond the first version.
  • You bring strong analytical and problem-solving skills and can communicate ideas clearly—both in writing (e.g. concise design documents) and in discussion with engineers, product owners, and business stakeholders.
  • You exercise pragmatic judgment: knowing when to build robust foundations and when a well-scoped, pragmatic solution is the right trade-off, and you can converge when many possible solutions exist.
  • You work well within a team and believe in open discussion, inclusion, and diversity.
  • You like to explore new approaches and technologies to solve problems but can move decisively from exploration to execution.
  • You hold a university degree in Computer Science, Mathematics, Engineering, Physics, or a similar field.

Our approach and technology stack

  • Hybrid work policy - minimum 4 days a week in the office
  • Lean principles and Agile development practices
  • Continuous deployment across all microservices
  • Big data ecosystem and SQL databases (Apache Airflow, Iceberg, Spark/Thrift, PostgreSQL, Azure Hyperscale)
  • Python as a primary backend language
  • High-level frameworks (Flask, SQLAlchemy, Alembic, Pytest, Socket.IO)
  • Amazon Web Services and ecosystem (AWS core services, Lambda, RDS, EKS, S3, etc.)
  • Cloud-native technologies (Kubernetes, Helm, Docker)
  • Observability technologies (Prometheus, Jaeger, Loki, Grafana, Sentry)
  • Infrastructure automation and containerized CI/CD (GitLab, Terraform, Atlantis)
  • Any technologies you feel would help us move forward

Benefits

  • Competitive salary / discretionary bonus / ESOP
  • Work closely with financial market practitioners
  • Private medical insurance
  • Pension salary sacrifice and contribution match
  • 25 days annual leave plus bank holidays
  • Regular team events - team dinners and drinks
  • Travel opportunities to Estonia

Job Details

Company
PrismFP
Location
London, England, United Kingdom