Python Data Engineer Jobs in the City of London


Market Data Engineer (Python) | Systematic Trading

City Of London, England, United Kingdom
Selby Jennings
A leading global investment firm is seeking a Market Data Engineer to join its London-based team and help build robust, scalable tick data infrastructure that powers systematic trading strategies across asset classes. This is a high-impact engineering role focused on designing and optimizing real-time and historical market data pipelines in a cloud-native environment. You'll work closely with trading and quant teams to ensure data is clean, fast, and accessible, enabling research, signal generation, and execution at scale.

What You'll Do:
- Build and maintain high-performance tick data pipelines for ingesting, processing, and storing large volumes of market data.
- Work with time-series databases (e.g., KDB, OneTick) and Parquet-based file storage to optimize data access and retrieval.
- Design scalable cloud-native solutions (AWS preferred) for market data ingestion and distribution.
- (Bonus) Integrate Apache Iceberg for large-scale data lake management and versioned data workflows.
- Collaborate with trading and engineering teams to define data requirements and deliver production-grade solutions.
- Implement robust data …
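The core of the tick-data work this listing describes is turning raw tick streams into research-ready time-series data. As a flavour of that kind of task, here is a minimal pure-Python sketch that buckets ticks into OHLCV bars; the tuple layout, bar size, and sample values are hypothetical illustrations, not the firm's actual stack or feed format:

```python
def ohlc_bars(ticks, bar_seconds=60):
    """Bucket (epoch_seconds, price, size) ticks into OHLCV bars.

    Assumes ticks arrive sorted by timestamp (so the last tick in a
    bucket is the close). Returns {bar_start: bar_dict}.
    """
    bars = {}
    for ts, price, size in ticks:
        start = ts - (ts % bar_seconds)  # floor timestamp to bar boundary
        bar = bars.get(start)
        if bar is None:
            bars[start] = {"open": price, "high": price, "low": price,
                           "close": price, "volume": size}
        else:
            bar["high"] = max(bar["high"], price)
            bar["low"] = min(bar["low"], price)
            bar["close"] = price      # last tick in the bucket wins
            bar["volume"] += size
    return bars

# Hypothetical sample ticks: (epoch_seconds, price, size)
ticks = [(0, 100.0, 5), (30, 101.5, 2), (59, 99.0, 1), (60, 102.0, 4)]
bars = ohlc_bars(ticks, bar_seconds=60)
print(bars)
```

In production this aggregation would typically run inside a KDB/OneTick query or a distributed pipeline rather than a Python loop, but the bucketing logic is the same.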

Senior Data Engineer - Python / Databricks

City of London, London, United Kingdom
Hybrid / WFH Options
Hunter Bond
My leading financial services client is looking for a Senior Data Engineer to help develop and expand their global analytics platform, supporting their Commodities front office trading business. You'll work with business stakeholders to understand requirements, design and build robust data pipelines, and deliver end-to-end analytics and ML/AI capabilities. This is a long-term contract role.

The following skills/experience are essential:
- Strong expertise in Python, PySpark, and SQL
- Extensive hands-on experience with Databricks
- Familiarity with Data Science and Machine Learning frameworks
- Previous experience in a Financial Services trading environment
- Background in DevOps and Infrastructure as Code (IaC)
- Commodities industry experience is highly desirable

Rate: Up to £700/day
Duration: 12 months+
Location: London (good work-from-home options available)

If you are interested in this Senior Data Engineer position and meet the above requirements, please apply immediately.
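This role pairs Python with SQL for trading analytics. As an illustration of that combination, here is a minimal sketch using the stdlib sqlite3 module to aggregate trade notionals per commodity; the trades schema, tickers, and figures are invented for illustration and are not the client's data model:

```python
import sqlite3

# Hypothetical per-commodity notional aggregation (illustrative schema only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (commodity TEXT, qty REAL, price REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [("WTI", 100, 78.5), ("WTI", 50, 79.0), ("Brent", 200, 82.25)],
)

# SUM(qty * price) gives the traded notional per commodity.
rows = conn.execute(
    "SELECT commodity, SUM(qty * price) AS notional "
    "FROM trades GROUP BY commodity ORDER BY commodity"
).fetchall()
print(rows)  # [('Brent', 16450.0), ('WTI', 11800.0)]
conn.close()
```

On the actual platform the equivalent query would run in Databricks via PySpark or Spark SQL, but the SQL shape carries over directly.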

Senior Data Engineer (with Python, PySpark and AWS)

City of London, London, United Kingdom
Luxoft
We're seeking a highly skilled and motivated Senior Data Engineer to join our growing data team. In this role, you'll architect and maintain robust, scalable data pipelines and infrastructure that power our analytics, machine learning, and business intelligence initiatives. You'll work with cutting-edge technologies like Python, PySpark, AWS EMR, and Snowflake, and collaborate across teams to ensure data is clean, reliable, and actionable.

Responsibilities:
- Build and maintain scalable ETL pipelines using Python and PySpark to support data ingestion, transformation, and integration
- Develop and optimize distributed data workflows on AWS EMR for high-performance processing of large datasets
- Design, implement, and tune Snowflake data warehouses to support analytical workloads and reporting needs
- Partner with data scientists, analysts, and product teams to deliver reliable, well-documented datasets
- Ensure data integrity, consistency, and accuracy across multiple sources and systems
- Automate data workflows and processes to improve efficiency and reduce manual intervention
- Monitor pipeline performance, identify bottlenecks, and resolve issues proactively
- Apply best practices in CI/…
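The ETL responsibilities in this listing follow the classic extract-transform-load shape. A minimal stdlib sketch of that pattern, with an integrity check in the transform step; the CSV fields, the rounding rule, and the dict "warehouse" are hypothetical stand-ins for the real PySpark/Snowflake pipeline:

```python
import csv
import io

def extract(raw_csv):
    """Parse raw CSV text into dict rows (stand-in for real ingestion)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Normalise types; drop rows that fail a basic integrity check."""
    clean = []
    for row in rows:
        try:
            clean.append({"id": int(row["id"]),
                          "amount": round(float(row["amount"]), 2)})
        except (KeyError, ValueError):
            continue  # a real pipeline would quarantine and log bad rows
    return clean

def load(rows, warehouse):
    """Upsert rows into the target store, keyed by id."""
    for row in rows:
        warehouse[row["id"]] = row
    return warehouse

raw = "id,amount\n1,10.50\n2,not-a-number\n3,7.25\n"
warehouse = load(transform(extract(raw)), {})
print(warehouse)
```

At scale, extract/transform/load map onto PySpark reads, DataFrame transformations, and Snowflake writes respectively; the structure, not the stdlib tooling, is the point.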