We are building the next-generation data platform at FTSE Russell — and we want you to shape it with us. Your role will involve:
• Designing and developing scalable, testable data pipelines using Python and Apache Spark
• Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3
• Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing
…
• Contributing to the development of a lakehouse architecture using Apache Iceberg
• Collaborating with business teams to translate requirements into data-driven solutions
• Building observability into data flows and implementing basic quality checks
• Participating in code reviews, pair programming, and architecture discussions
• Continuously learning about the financial indices domain and sharing insights with the team
WHAT YOU'LL BRING
… ideally with type hints, linters, and tests like pytest)
• Understands data engineering basics: batch processing, schema evolution, and building ETL pipelines
• Has experience with or is eager to learn Apache Spark for large-scale data processing
• Is familiar with the AWS data stack (e.g. S3, Glue, Lambda, EMR)
• Enjoys learning the business context and working closely with stakeholders
• Works …
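For context on what "scalable, testable pipelines with type hints and pytest" looks like in practice, here is a minimal illustrative sketch. It is not part of the listing: the function and data class names are hypothetical, and it uses plain Python rather than a live Spark session so it runs anywhere. The pattern — a small, pure, typed transformation with a unit test beside it — carries over directly to Spark jobs.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class IndexConstituent:
    """Hypothetical record type for one index member."""
    ticker: str
    weight: float  # portfolio weight as a fraction, e.g. 0.05 = 5%


def normalise_weights(rows: list[IndexConstituent]) -> list[IndexConstituent]:
    """Rescale weights so they sum to 1.0 — a typical small, testable pipeline step."""
    total = sum(r.weight for r in rows)
    if total <= 0:
        raise ValueError("weights must sum to a positive value")
    return [IndexConstituent(r.ticker, r.weight / total) for r in rows]


# pytest-style test: pure functions like this need no cluster to verify
def test_normalise_weights() -> None:
    out = normalise_weights([IndexConstituent("AAA", 2.0), IndexConstituent("BBB", 2.0)])
    assert [r.weight for r in out] == [0.5, 0.5]
```

Keeping the transformation logic pure like this means the same function can be applied inside a Spark UDF or a pandas pipeline and tested in milliseconds with pytest.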
City of London, London, United Kingdom Hybrid/Remote Options
N Consulting Global
experience in a leadership or technical lead role, with official line management responsibility. Strong experience with modern data stack technologies, including Python, Snowflake, AWS (S3, EC2, Terraform), Airflow, dbt, Apache Spark, Apache Iceberg, and Postgres. Skilled in balancing technical excellence with business priorities in a fast-paced environment. Strong communication and stakeholder management skills, able to translate …
Focus Areas
• Strong experience with .NET/C# (backend-focused)
• Hands-on with analytical databases such as ClickHouse, SingleStore, Rockset, or TimescaleDB
• Experience with open-standard data lakes (e.g. Apache Iceberg, Delta Tables, Apache Spark, column stores)
• Comfortable working with large-scale data ingestion and processing (batch or streaming)
Data Developer – Hybrid – £800/day Inside IR35 …
City of London, London, United Kingdom Hybrid/Remote Options
VirtueTech Recruitment Group
Greetings! Adroit People is currently hiring
Title: Senior AWS Data Engineer
Location: London, UK
Work Mode: Hybrid – 3 days/week
Duration: 12 Months FTC
Keywords: AWS, Python, Apache Spark, ETL
Job Spec: We are building the next-generation data platform at FTSE Russell and we want you to shape it with us. Your role will involve:
• Designing and developing scalable, testable data pipelines using Python and Apache Spark
• Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3
• Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing
• Contributing to the development of a lakehouse architecture using Apache Iceberg
• Collaborating with business teams to translate requirements into …
generation data platform.
What You’ll Get
A key engineering role within a world-class technology organisation that values innovation and impact. Exposure to a modern data ecosystem including Iceberg, Kafka, Airflow, and other open-source technologies. A collaborative, intellectually curious culture where engineers are trusted to take ownership and drive outcomes. Excellent compensation package with strong performance incentives.
businesses and gaining an overview of many different sectors.
What We’re Looking For
5+ years of hands-on experience in AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Strong experience engineering in a front-office/capital markets environment. Previous experience in implementing best practices for data engineering …
businesses and gaining an overview of many different sectors.
What We’re Looking For
10+ years of hands-on experience in AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Strong experience engineering in a front-office/capital markets environment. Previous experience in implementing best practices for data engineering …
e.g., KDB, OneTick) and Parquet-based file storage to optimize data access and retrieval.
• Design scalable cloud-native solutions (AWS preferred) for market data ingestion and distribution.
• (Bonus) Integrate Apache Iceberg for large-scale data lake management and versioned data workflows.
• Collaborate with trading and engineering teams to define data requirements and deliver production-grade solutions.
• Implement robust … systems.
• Strong Python skills and familiarity with cloud platforms (AWS, GCP, or Azure).
• Experience with tick data and building tick data pipelines.
• Proficiency with Parquet-based file storage; Iceberg experience is a plus.
• Familiarity with Kubernetes, containerization, and modern orchestration tools.
• Experience with time-series databases (KDB, OneTick) and C++ is a plus.
• Strong problem-solving skills and …
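As an illustration of the "tick data pipelines" this role describes, here is a minimal sketch (not from the listing; all names are hypothetical) of one common step: bucketing a stream of ticks into fixed time intervals and computing OHLC (open/high/low/close) bars. Production systems would do this over KDB, OneTick, or Parquet files, but the core aggregation logic is the same and is shown here in plain Python.

```python
from collections import defaultdict
from typing import NamedTuple


class Tick(NamedTuple):
    ts: int       # epoch seconds
    price: float


class Bar(NamedTuple):
    open: float
    high: float
    low: float
    close: float


def ohlc_bars(ticks: list[Tick], interval: int = 60) -> dict[int, Bar]:
    """Bucket ticks into fixed-width intervals and compute OHLC per bucket.

    Assumes ticks arrive sorted by timestamp, as in a typical tick feed;
    the dict key is the bucket's start time.
    """
    buckets: dict[int, list[float]] = defaultdict(list)
    for t in ticks:
        buckets[t.ts - t.ts % interval].append(t.price)
    return {start: Bar(p[0], max(p), min(p), p[-1]) for start, p in buckets.items()}


ticks = [Tick(0, 100.0), Tick(10, 101.5), Tick(30, 99.0), Tick(65, 100.5)]
bars = ohlc_bars(ticks)
# bars[0] -> Bar(open=100.0, high=101.5, low=99.0, close=99.0)
```

The same bucketing logic maps naturally onto a Spark `groupBy` over a truncated timestamp column, or onto a partitioned Parquet/Iceberg layout keyed by bar interval.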