Data Engineer
Full-time, office-based in West London five days a week; a great salary and benefits package is offered.
As a Data Engineer with Commodity Trading and Data Risk experience, you will focus on building, managing, and optimizing risk analytics workflows. You will play a key role in designing and evolving the data pipelines, models, and tooling that support critical risk processes, while also contributing to the broader data platform built on Microsoft Fabric. Working closely with the platform team, you will help develop shared infrastructure, scalable ingestion frameworks, and high-quality data products.
This role sits at the intersection of data engineering, risk technology and modern data platform practices within a global, always-on trading environment.
Job Accountabilities:
• Understand risk workflows end-to-end and translate them into reliable, production-grade data pipelines and products.
• Build and maintain batch and near-real-time data ingestion pipelines from diverse sources including relational databases, REST APIs, FTP/SFTP feeds, and cloud storage.
• Contribute to the data platform, delivering harmonised, governed data products that serve multiple business functions.
• Collaborate with risk, analytics, and engineering teams to productionize and maintain risk models and scripts.
• Implement best practices for code quality, testing, and release management across the data platform.
• Build and support Power BI semantic models and Direct Lake datasets.
• Manage and maintain Azure DevOps pipelines for deployment, version control, and CI/CD of risk-related scripts and data workflows.
• Monitor system performance and troubleshoot issues related to data pipelines and deployments.
• Ensure proper data governance, security, and compliance standards are applied.
Required Skills & Experience:
• Hands-on experience with Microsoft Fabric, Azure data services (e.g., Synapse Analytics, Data Factory), or Databricks for large-scale data processing.
• Proficiency in Python / SQL for data engineering and scripting.
• Familiarity with risk analytics environments or financial data.
• Strong experience with Apache Spark, including performance optimization and distributed data processing.
• Strong experience ingesting data from diverse sources including relational databases, REST APIs, FTP/SFTP file feeds, and cloud storage.
• Experience managing data pipelines and production workflows.
• Experience with Azure DevOps (CI/CD pipelines, repos, release management).
• Experience with version control (Git) and software development lifecycle practices.
Nice to Have:
• Experience with streaming data technologies (e.g. Kafka, Azure Event Hubs).
• Exposure to metadata-driven framework design and config-driven pipeline development.
• Knowledge of non-relational databases (e.g. MongoDB, Cosmos DB).
• Familiarity with Data Mesh principles and domain-oriented data ownership.
• Experience with monitoring/logging tools in Azure.
With a focus on Energy Trading, Oil & Gas, Financial Markets and Commodities, we offer a transparent recruitment service that has proven reliable and effective for over 40 years. We are ISO accredited and proud of our excellent TrustPilot reviews. Your search for a new contract assignment or a new permanent job will be in safe hands with Eaglecliff Recruitment. Please telephone for an immediate response, or email your CV for a quick reply. Eaglecliff Ltd is acting in the capacity of an employment agency for permanent recruitment and an employment business for contractor resourcing.