Data Engineer
About Us
Finalto is a global leader in liquidity provision and trading technology solutions, serving institutional and B2B clients across financial markets worldwide. With regulated entities in the UK, Singapore, Cyprus, Australia, and the UAE, and additional operational teams in Denmark and Bulgaria, we combine international reach with local expertise.
Our business spans multi-asset liquidity, risk management, and cutting-edge trading platforms, supporting clients in achieving more efficient and sustainable growth. At the core of our success is our commitment to innovation, operational excellence, and robust governance.
As part of a global financial services group, Finalto combines the scale and stability of an established organisation with the agility of a fintech innovator. We are driven by collaboration, integrity, and performance.
Joining Finalto means becoming part of a diverse and dynamic team where your contributions have real impact. We invest in our people, offering opportunities for professional growth, international exposure, and the chance to shape the future of trading technology and liquidity solutions.
Role Description
As a Data Engineer at Finalto, you will design and build scalable data platforms and pipelines using modern technologies, enabling the organisation to unlock the full potential of its data for analytics and AI.
This role requires a high level of ownership and accountability, with responsibility for delivering robust solutions across the full data engineering lifecycle, from ingestion to delivery.
Responsibilities
- Build and optimise scalable data pipelines using PySpark and SQL
- Design and implement ETL/ELT processes for batch and streaming data
- Develop data solutions using Databricks Lakehouse and Delta Lake
- Ingest and integrate data from internal and external sources (e.g. Kafka, CDC)
- Optimise Spark jobs and data workflows for performance, scalability, and cost efficiency
- Manage infrastructure and environments using Terraform (IaC)
- Ensure data quality and pipeline reliability through effective monitoring
- Implement governance and access controls (e.g. Unity Catalog)
- Deliver clean, structured, and accessible data for analytics and business use
- Collaborate with cross-functional teams to support analytics, reporting, and AI/ML initiatives
Requirements
- Demonstrated experience in data engineering and a track record of building scalable data solutions
- Strong proficiency in Python and SQL
- Hands-on experience with Apache Spark (including Structured Streaming)
- Experience with Databricks (Workflows, Delta Live Tables, Lakehouse architecture)
- Experience with cloud platforms (AWS, Azure, or GCP)
- Experience with Terraform or similar infrastructure-as-code tools
- Experience working with structured and semi-structured data (e.g. JSON)
- Familiarity with CI/CD, modular development, and code documentation
- Strong communication skills and ability to work independently with a high level of ownership
Desired Skills
- Databricks Certified Data Engineer (Associate or Professional)
- Experience with Kafka, dbt, or similar data tools
- Knowledge of Scala or other programming languages
- Experience working in an Agile environment
- Interest in AI, machine learning, and data-driven technologies
Preferred Qualifications
- A degree in Computer Science or another STEM field