Data Engineer
Company Description
Finalto is a global leader in liquidity provision and trading technology solutions, serving institutional and B2B clients across financial markets worldwide. With regulated entities in the UK, Singapore, Cyprus, Australia, and the UAE, and additional operational teams in Denmark and Bulgaria, we combine international reach with local expertise.
Our business spans multi-asset liquidity, risk management, and cutting-edge trading platforms, supporting
clients in achieving more efficient and sustainable growth. At the core of our success is our commitment to
innovation, operational excellence, and robust governance.
As part of a global financial services group, Finalto combines the scale and stability of an established
organisation with the agility of a fintech innovator. We are driven by collaboration, integrity, and performance.
Joining Finalto means becoming part of a diverse and dynamic team where your contributions have real
impact. We invest in our people, offering opportunities for professional growth, international exposure, and the chance to shape the future of trading technology and liquidity solutions.
Role Description
As a Data Engineer at Finalto, you will be responsible for building data pipelines from various sources using a cutting-edge modern tech stack. You will work with technologies such as AWS and Databricks to ensure Finalto unlocks the full potential of its data.
Key Responsibilities
· Build and maintain scalable data pipelines
· Develop and optimise data models for analytics and reporting
· Ingest and integrate data from multiple internal and external sources
· Ensure data quality, reliability, and monitoring of pipelines
· Support stakeholders by delivering clean, accessible, and well-structured data
· Research, prototype, and support implementation of AI solutions
Requirements
· Strong proficiency in Python and SQL
· Experience with lakehouse/data platforms (Databricks, BigQuery) and building data pipelines/architectures (DBT, DLT)
· Experience with batch and streaming ingestion (CDC, Kafka) and distributed processing (Spark)
· Experience working with cloud infrastructure (AWS: S3, Lambda, EC2, DMS)
· Experience handling structured and semi-structured data (e.g. JSON)
· Familiarity with CI/CD, modular development, and code documentation
· Strong communication and teamwork skills
· Interest in AI, machine learning, and data-driven tools
Desired Skills
· Experience with Databricks and DBT
· Experience with other programming languages (C++, SAS, R, Scala, etc.)
· Intellectually curious, with a passion for continuous learning
· Experience working within an Agile environment
· Strong numerical and analytical skills
Preferred Qualifications
· A degree in a STEM field such as Computer Science