Contract Data Pipeline Jobs in Coventry

2 of 2 Contract Data Pipeline Jobs in Coventry

PySpark Developer

Coventry, Warwickshire, United Kingdom
DCV Technologies
DCV Technologies is seeking a PySpark Developer with strong PySpark and Azure engineering skills to join a major transformation programme within the financial-markets domain. This role is fully hands-on, focused on building and optimising large-scale data pipelines, dataflows, semantic models, and lakehouse components.

Key Responsibilities
- Design, build and optimise Spark-based data pipelines for batch and streaming workloads
- Develop Fabric dataflows, pipelines, and … semantic models
- Implement complex transformations, joins, aggregations and performance tuning
- Build and optimise Delta Lake/Delta tables
- Develop secure data solutions including role-based access, data masking and compliance controls
- Implement data validation, cleansing, profiling and documentation
- Work closely with analysts and stakeholders to translate requirements into scalable technical solutions
- Troubleshoot and improve … and workload performance

Essential Skills
- Strong hands-on experience with PySpark, Spark SQL, Spark Streaming and DataFrames
- Microsoft Fabric (Fabric Spark jobs, dataflows, pipelines, semantic models)
- Azure: ADLS, cloud data engineering, notebooks
- Python programming; Java exposure beneficial
- Delta Lake/Delta table optimisation experience
- Git/GitLab, CI/CD pipelines, DevOps practices
- Strong troubleshooting and problem-solving ability
Employment Type: Contract
Rate: £35 - £55/hour
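The responsibilities above centre on Spark-based batch pipelines with joins, aggregations and Delta Lake tables. The sketch below illustrates that pattern in PySpark; the table paths, column names and date window are hypothetical, and it assumes a Spark session with the Delta Lake (delta-spark) extensions available.

```python
# Minimal sketch of the kind of batch pipeline the role describes.
# All paths, table names and columns are hypothetical; assumes the
# delta-spark package is installed and on the Spark classpath.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("trades-daily-aggregation")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Hypothetical source tables in ADLS.
trades = spark.read.format("delta").load(
    "abfss://raw@lake.dfs.core.windows.net/trades")
instruments = spark.read.format("delta").load(
    "abfss://ref@lake.dfs.core.windows.net/instruments")

# Join, transform and aggregate: daily notional per instrument type.
daily_notional = (
    trades.join(instruments, "instrument_id")
    .withColumn("trade_date", F.to_date("executed_at"))
    .groupBy("trade_date", "instrument_type")
    .agg(F.sum(F.col("quantity") * F.col("price")).alias("notional"))
)

# Write to a partitioned Delta table, overwriting only the partitions
# covered by the (hypothetical) reprocessing window.
(daily_notional.write.format("delta")
    .mode("overwrite")
    .partitionBy("trade_date")
    .option("replaceWhere", "trade_date >= '2024-01-01'")
    .save("abfss://curated@lake.dfs.core.windows.net/daily_notional"))
```

The replaceWhere overwrite keeps reruns idempotent for just the partitions being rebuilt, a common tuning pattern for batch Delta jobs of this kind.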

PySpark Developer

Coventry, West Midlands (County), United Kingdom
DCV Technologies
DCV Technologies is seeking a PySpark Developer with strong PySpark and Azure engineering skills to join a major transformation programme within the financial-markets domain. This role is fully hands-on, focused on building and optimising large-scale data pipelines, dataflows, semantic models, and lakehouse components.

Key Responsibilities
- Design, build and optimise Spark-based data pipelines for batch and streaming workloads
- Develop Fabric dataflows, pipelines, and … semantic models
- Implement complex transformations, joins, aggregations and performance tuning
- Build and optimise Delta Lake/Delta tables
- Develop secure data solutions including role-based access, data masking and compliance controls
- Implement data validation, cleansing, profiling and documentation
- Work closely with analysts and stakeholders to translate requirements into scalable technical solutions
- Troubleshoot and improve … and workload performance

Essential Skills
- Strong hands-on experience with PySpark, Spark SQL, Spark Streaming and DataFrames
- Microsoft Fabric (Fabric Spark jobs, dataflows, pipelines, semantic models)
- Azure: ADLS, cloud data engineering, notebooks
- Python programming; Java exposure beneficial
- Delta Lake/Delta table optimisation experience
- Git/GitLab, CI/CD pipelines, DevOps practices
- Strong troubleshooting and problem-solving ability
Employment Type: Contract
Rate: £35 - £55/hour
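For the streaming workloads the role also calls for, here is a minimal Spark Structured Streaming sketch: it parses JSON quote events from a hypothetical Kafka topic, computes one-minute windowed averages with a watermark for late data, and appends to a Delta table. The broker address, topic, schema and paths are illustrative, and the Kafka source and Delta sink packages are assumed to be on the classpath.

```python
# Minimal Structured Streaming sketch; all names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("quotes-stream").getOrCreate()

quote_schema = StructType([
    StructField("symbol", StringType()),
    StructField("bid", DoubleType()),
    StructField("ask", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Parse JSON quote events from a hypothetical Kafka topic.
quotes = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "quotes")
    .load()
    .select(F.from_json(F.col("value").cast("string"), quote_schema).alias("q"))
    .select("q.*")
)

# One-minute average mid price per symbol, tolerating late events.
mid_prices = (
    quotes.withWatermark("event_time", "5 minutes")
    .groupBy(F.window("event_time", "1 minute"), "symbol")
    .agg(F.avg((F.col("bid") + F.col("ask")) / 2).alias("avg_mid"))
)

# Stream into a Delta table with checkpointing for restart safety.
query = (
    mid_prices.writeStream.format("delta")
    .outputMode("append")
    .option("checkpointLocation",
            "abfss://chk@lake.dfs.core.windows.net/quotes_mid")
    .start("abfss://curated@lake.dfs.core.windows.net/quotes_mid")
)
query.awaitTermination()
```

The watermark bounds the aggregation state by discarding events more than five minutes late, which is what makes append output mode valid for this windowed query.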
Data Pipeline contract rates in Coventry
- 25th Percentile: £381
- Median: £388
- 75th Percentile: £394