3 of 3 Contract Change Data Capture Jobs in London

Lead PySpark Engineer - Data, SAS, AWS

Hiring Organisation
Randstad Digital
Location
London, United Kingdom
Employment Type
Contract
Contract Rate
£350 - £380 per day
Lead Data Engineer - PySpark/AWS/Python/SAS - Financial Sector
As a Lead PySpark Engineer, you will design, develop, and optimise complex data processing solutions using PySpark on AWS. You will work hands-on with code, modernising legacy data workflows and supporting large-scale … PySpark migrations. The role requires strong engineering discipline, deep data understanding, and the ability to deliver production-ready data pipelines in a financial services environment.
Essential Skills
PySpark & Data Engineering
5+ years of hands-on PySpark experience.
SAS-to-PySpark migration experience.
Proven ability to write ...

Lead PySpark Engineer

Hiring Organisation
SKILLFINDER INTERNATIONAL
Location
London, United Kingdom
Employment Type
Contract
Contract Rate
GBP Annual
Lead PySpark Engineer
As a Lead PySpark Engineer, you will design, develop, and optimise complex data processing solutions on AWS. You will work hands-on with PySpark, modernise legacy data workflows, and support large-scale SAS-to-PySpark migration programmes. This role requires strong engineering discipline, deep data expertise, and the ability to deliver production-ready data pipelines within a financial services environment.
Skill Profile:
PySpark - P3 (Advanced)
AWS - P3 (Advanced)
SAS - P1 (Foundational)
Key Responsibilities
Technical Delivery
Design, develop, and optimise complex PySpark code for ETL/ELT and data-mart workloads. Convert ...

Data Engineer (Azure)

Hiring Organisation
Hays Specialist Recruitment Limited
Location
London, South East, England, United Kingdom
Employment Type
Contractor
Contract Rate
£450 - £550 per day
Your new company
Working for a renowned financial services organisation.
Your new role
Seeking a Data Engineer to help design and maintain scalable … batch and near-real-time ingestion pipelines, modernising legacy ETL/ELT processes into Azure and Snowflake, and implementing best-practice patterns such as CDC, incremental loading, schema evolution, and automated ingestion frameworks. They build cloud-native solutions using Azure Data Factory/Synapse, Databricks/Spark, ADLS Gen2 ...