Senior Data Platform Engineer
An exciting opportunity has become available to join our fast-growing team as a key part of the Technology & Innovation function. This role offers an outstanding chance to join a growing global integrated marketing advisory, and will give an ambitious individual valuable experience for progression and development.
The Company
mediasense is a global, independent advisor that brings the clarity, connection and confidence modern marketers need to fuel growth. We help marketers eliminate waste and maximize the impact of their most significant investments at scale. Our ambition is to define and own an entirely new category – recognized as the world’s most trusted, independent and impactful marketing advisor. We do this by elevating how clients operate, how agencies deliver, how platforms perform and how all parts of the ecosystem connect. Because progress doesn’t happen in isolation, it takes alignment, intelligence and trust. That’s what we enable. That’s what we stand for. That’s Unified Marketing Intelligence.
mediasense is designed around how we help marketers optimize and transform through best-in-class structure and governance (Organization), partners and platforms (Ecosystem), governance and controls (Assurance), and measurement and effectiveness (Science). We have the proven ability to effectively support large, complex organizations and deliver multiple projects simultaneously. As evidenced through our strong track record with clients, we pride ourselves on the consistent high quality of service delivery and the ability to accelerate transformation and growth.
mediasense has over 200 employees across London, New York, Singapore and New Delhi.
For more information, visit www.media-sense.com
What We Offer
- Hybrid working
- Initial 28 days' holiday (excluding bank holidays), with an accrual of 4 additional days over your first 4 years of service
- Day off for your birthday + option to purchase up to 10 days' additional annual leave per year
- Length of service awards
- Work from any location in the world up to 4 weeks per year
- Bonuses: Discretionary company bonus scheme & new business bonus, employee referral bonus
- Pension & Group life insurance
- Private healthcare, enhanced parental leave, employee assistance programme
- Annual season ticket loan, Cycle to work scheme + Tech & voucher schemes
- Eye test & contribution towards glasses for VDU
- Charity Day plus charity fundraising events
- Learning & development opportunities
- Frequent events, such as summer, winter & bi-weekly socials
- Free fruit & snacks + building-linked benefits such as being dog friendly, access to gyms & complimentary gifts, classes or discounts
The Role
This role is for a Senior Data Platform Engineer, based in London. This role sits within our Data & Insights team, which provides the analytical infrastructure that underpins our advisory work. The team is building a modern data lakehouse on Snowflake, migrating legacy Alteryx and Tableau workflows into automated, versioned pipelines, and packaging outputs as software products with formal release cycles. We treat data engineering with the same discipline as product engineering: version-controlled, tested, continuously delivered, and documented.
The role itself will involve a range of activities including:
- Release lifecycle ownership (end-to-end): treat data products like software products by defining a clear release strategy (versioning, promotion through environments, rollback), and managing database/schema changes so delivery is predictable, low-risk, and repeatable
- CI/CD, testing & Git workflow: build and maintain automated test and deployment pipelines; establish branching and PR review discipline; write unit, integration, and data quality tests as a matter of course (not an afterthought)
- Operational health of the data platform: own monitoring, logging, alerting, and incident response for data pipelines and platform services; administer and optimise Snowflake (compute, performance, cost management, and access controls)
- Legacy maintenance & migration (supporting responsibility): maintain and gradually migrate legacy pipelines and applications across Alteryx, AWS, and Azure into the core lakehouse platform
- Infrastructure, security & data governance: contribute to reproducible, auditable environment management (Terraform or equivalent); help define and implement access controls, secrets handling, and data governance guardrails across CI/CD, Snowflake, and cloud services so security and compliance are built-in by default
- Interfaces, standards & collaboration: work with analysts and data scientists to define clean interfaces and data contracts; help establish and evolve shared engineering standards and ways of working so delivery is repeatable, low-friction, and resilient
- Documentation & runbooks: maintain clear technical documentation, operational runbooks, and post-incident learnings so knowledge is transferable and systems don’t rely on tribal memory
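To give a flavour of the data-quality testing discipline described above, here is a minimal, hypothetical sketch in plain Python. The checks, column names, and sample data are illustrative only (in practice these would more likely be dbt tests or a Great Expectations suite, wired into CI):

```python
# Illustrative data-quality checks of the kind a pipeline might run
# before promoting a release. Plain Python, pytest-style assertions.

def check_no_nulls(rows, column):
    """Fail if any row is missing a value for `column`."""
    missing = [i for i, row in enumerate(rows) if row.get(column) is None]
    assert not missing, f"{column} is null in rows: {missing}"

def check_unique(rows, column):
    """Fail if `column` contains duplicate values."""
    values = [row[column] for row in rows]
    dupes = {v for v in values if values.count(v) > 1}
    assert not dupes, f"duplicate {column} values: {sorted(dupes)}"

def check_range(rows, column, lo, hi):
    """Fail if any value in `column` falls outside [lo, hi]."""
    bad = [row[column] for row in rows if not (lo <= row[column] <= hi)]
    assert not bad, f"{column} out of range [{lo}, {hi}]: {bad}"

# A tiny extract (hypothetical columns) that should pass all three checks.
sample = [
    {"campaign_id": 1, "spend": 120.0},
    {"campaign_id": 2, "spend": 85.5},
]
check_no_nulls(sample, "campaign_id")
check_unique(sample, "campaign_id")
check_range(sample, "spend", 0.0, 1_000_000.0)
```

The point is less the specific checks than the habit: failures raise with an actionable message naming the offending rows, so a broken load is visible before it reaches a consumer.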
The Candidate
This is a hands-on role for someone who wants real ownership in a lean, fast-moving team. You bring software engineering discipline as second nature: you default to automation over manual process, you question un-versioned or untested code, and you treat CI/CD, testing, and release management as core practice applied to data products.
The ideal candidate will have the following:
Required (core)
- General project management - ability to manage timelines, communicate with stakeholders, and deliver to scope
- CI/CD & release management - practical experience owning automated build/test/deploy pipelines (GitHub Actions or equivalent), environment promotion, rollbacks, and change control
- Python - fluent for automation and pipeline/tooling development; writes clear, maintainable, testable code
- Git workflow & code review discipline - comfortable establishing branching strategy, pull request practices, and merge standards so a small team can ship safely
- Testing & quality mindset - writes tests as a default (unit/integration/data quality) and understands how to make failures visible and actionable (e.g., pytest, dbt tests, Great Expectations or equivalent)
- Debugging, troubleshooting & maintainable architecture - comfortable finding root causes in production issues; has opinions on how to structure a codebase that three people can work in without stepping on each other; values documentation and runbooks
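To make the release-management expectation concrete, here is a minimal, hypothetical sketch of semantic-version bumping, the kind of logic release tooling applies when promoting a data product. The function name and change categories are illustrative, not a description of our actual tooling:

```python
# Hypothetical sketch: compute the next semantic version for a release.
# In a real CI pipeline, `change` might be derived from PR labels or
# commit messages rather than passed in by hand.

def bump_version(version: str, change: str) -> str:
    """Return the next semver for a 'major', 'minor', or 'patch' change."""
    major, minor, patch = (int(p) for p in version.split("."))
    if change == "major":   # breaking schema or data-contract change
        return f"{major + 1}.0.0"
    if change == "minor":   # backwards-compatible addition
        return f"{major}.{minor + 1}.0"
    if change == "patch":   # fix with no interface change
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change!r}")
```

For example, `bump_version("1.4.2", "minor")` yields `"1.5.0"`. Treating schema changes as "major" is what makes rollback and environment promotion predictable for downstream consumers.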
Required (domain)
- Snowflake - hands-on with data modelling, SQL optimisation, compute management, and access controls
- dbt - production experience building and deploying models, tests, macros, and documentation
Desirable
- AWS and/or Azure - comfortable navigating cloud environments to maintain and progressively migrate legacy workloads
- Orchestration - Airflow, Prefect, Dagster, or equivalent
- Infrastructure-as-code - Terraform or similar
- Data governance & observability - lineage, data quality frameworks, access policy, and monitoring/alerting practices
Strong academic background in computer science, engineering, mathematics, or a related discipline preferred. More important than credentials: direct ownership, clear communication about what you know and don’t, and code you’re proud to have the next person read.
Every application is reviewed by a human on our team (not AI), so it may take us a bit of time to get through them. Because of the large number of applications we receive, we're not always able to reply to everyone individually, especially via messages, but we truly appreciate your interest.
Please note that we can only consider candidates who already have the right to work in the UK and do not require, now or in the future, visa sponsorship.