51 to 55 of 55 PySpark Jobs in the Midlands

Data Specialist

Hiring Organisation
Vector Resourcing
Location
Worcester, Worcestershire, UK
Employment Type
Full-time
…open-source data validation frameworks to ensure data accuracy, schema integrity, and quality across ingestion and transformation pipelines. Test and validate data pipelines and PySpark notebooks developed by Data Engineers, ensuring they meet quality, reliability, and validation standards. Defining and standardizing monitoring, logging, alerting, and KPIs/SLAs across … setting up Alert rules and building dashboards from data queried (KQL) in Log Analytics. Experience with Fabric Data Factory, Azure Data Factory, Synapse pipelines, and PySpark notebooks. Hands-on experience calling REST/OData APIs to validate data. Experience writing SQL and scripts for programmatic data validation and reconciliation ...
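
For orientation, a minimal sketch of the pipeline-validation work this listing describes, written in plain PySpark rather than any particular open-source framework. The table path, column names, and thresholds are hypothetical.

```python
# Minimal pipeline-validation sketch; path, columns, and rules are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, DateType

spark = SparkSession.builder.appName("pipeline-validation").getOrCreate()

expected_schema = StructType([
    StructField("order_id", StringType(), nullable=False),
    StructField("amount", DoubleType(), nullable=True),
    StructField("order_date", DateType(), nullable=True),
])

df = spark.read.format("delta").load("/lake/silver/orders")  # hypothetical path

# Schema integrity: fail fast if the landed schema drifts from the contract.
assert df.schema == expected_schema, f"Schema drift detected: {df.schema}"

# Data accuracy/quality: nulls in the business key and negative amounts.
checks = df.agg(
    F.count(F.when(F.col("order_id").isNull(), 1)).alias("null_keys"),
    F.count(F.when(F.col("amount") < 0, 1)).alias("negative_amounts"),
).first()

failures = {k: v for k, v in checks.asDict().items() if v > 0}
if failures:
    raise ValueError(f"Validation failed: {failures}")
```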

Data Specialist

Hiring Organisation
Vector Resourcing
Location
Wolverhampton, West Midlands, UK
Employment Type
Full-time
…open-source data validation frameworks to ensure data accuracy, schema integrity, and quality across ingestion and transformation pipelines. Test and validate data pipelines and PySpark notebooks developed by Data Engineers, ensuring they meet quality, reliability, and validation standards. Defining and standardizing monitoring, logging, alerting, and KPIs/SLAs across … setting up Alert rules and building dashboards from data queried (KQL) in Log Analytics. Experience with Fabric Data Factory, Azure Data Factory, Synapse pipelines, and PySpark notebooks. Hands-on experience calling REST/OData APIs to validate data. Experience writing SQL and scripts for programmatic data validation and reconciliation ...
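
As one illustration of the reconciliation work mentioned above, a sketch comparing a source extract against its transformed target in PySpark. The table names, join key, and tolerance are hypothetical.

```python
# Source-vs-target reconciliation sketch; tables and key are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("reconciliation").getOrCreate()

source = spark.table("staging.orders_raw")      # hypothetical source table
target = spark.table("reporting.orders_clean")  # hypothetical target table

# Row-count reconciliation: the cheapest first-line check.
src_count, tgt_count = source.count(), target.count()
print(f"source={src_count} target={tgt_count} diff={src_count - tgt_count}")

# Key-level reconciliation: rows present on one side only.
missing_in_target = source.join(target, "order_id", "left_anti")
extra_in_target = target.join(source, "order_id", "left_anti")
print(f"missing={missing_in_target.count()} extra={extra_in_target.count()}")

# Measure reconciliation: compare totals, tolerating float noise.
src_total = source.agg(F.sum("amount")).first()[0] or 0.0
tgt_total = target.agg(F.sum("amount")).first()[0] or 0.0
if abs(src_total - tgt_total) > 0.01:
    raise ValueError(f"Totals diverge: {src_total} vs {tgt_total}")
```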

Data Specialist

Hiring Organisation
Vector Resourcing
Location
Stoke-on-Trent, Staffordshire, UK
Employment Type
Full-time
…open-source data validation frameworks to ensure data accuracy, schema integrity, and quality across ingestion and transformation pipelines. Test and validate data pipelines and PySpark notebooks developed by Data Engineers, ensuring they meet quality, reliability, and validation standards. Defining and standardizing monitoring, logging, alerting, and KPIs/SLAs across … setting up Alert rules and building dashboards from data queried (KQL) in Log Analytics. Experience with Fabric Data Factory, Azure Data Factory, Synapse pipelines, and PySpark notebooks. Hands-on experience calling REST/OData APIs to validate data. Experience writing SQL and scripts for programmatic data validation and reconciliation ...
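
The REST/OData validation requirement above might look like the following sketch, which checks lake row counts against a source system's OData `$count` endpoint. The endpoint URL and table name are hypothetical.

```python
# Validating lake output against a source OData API; endpoint and table are hypothetical.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("api-validation").getOrCreate()

# OData v4 exposes entity counts as plain text via the $count path segment.
resp = requests.get(
    "https://erp.example.com/odata/Orders/$count",  # hypothetical endpoint
    timeout=30,
)
resp.raise_for_status()
api_count = int(resp.text)

lake_count = spark.table("silver.orders").count()  # hypothetical table

if api_count != lake_count:
    raise ValueError(f"Count mismatch: API={api_count}, lake={lake_count}")
print(f"Reconciled {lake_count} rows against the source API")
```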

Senior Data Engineer

Hiring Organisation
Tenth Revolution Group
Location
Northampton, Northamptonshire, UK
Employment Type
Full-time
…responsible for: Designing end-to-end data architecture aligned with modern best practices. Building and managing ingestion pipelines using Databricks and related tools. Developing PySpark/Spark SQL notebooks for complex transformations and cleansing. Applying governance, security, and CI/CD best practices across cloud environments. Leading technical discussions … producing professional documentation. To be successful in this role, you will have: Hands-on experience with Databricks including Unity Catalog. Strong PySpark/Spark SQL skills for large-scale transformations. Experience integrating with diverse data sources such as APIs, cloud storage, and databases. Experience with the Azure cloud data ...
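
To make the transformation-and-cleansing duties concrete, a notebook-style sketch that mixes the PySpark DataFrame API with Spark SQL. All paths, table names, columns, and rules are hypothetical.

```python
# Notebook-style transform/cleanse sketch; names and rules are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("transform-cleanse").getOrCreate()

raw = spark.read.format("delta").load("/lake/bronze/customers")  # hypothetical

cleansed = (
    raw
    .dropDuplicates(["customer_id"])                    # dedupe on business key
    .withColumn("email", F.lower(F.trim("email")))      # normalise casing
    .withColumn("country", F.coalesce("country", F.lit("UNKNOWN")))
    .filter(F.col("customer_id").isNotNull())           # drop unkeyed rows
)

# The same pipeline can be finished in Spark SQL where that reads better.
cleansed.createOrReplaceTempView("customers_cleansed")
curated = spark.sql("""
    SELECT customer_id,
           email,
           country,
           current_timestamp() AS processed_at
    FROM customers_cleansed
""")

curated.write.format("delta").mode("overwrite").saveAsTable("silver.customers")
```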

Senior Data Engineer

Hiring Organisation
Tenth Revolution Group
Location
Birmingham, West Midlands, UK
…responsible for: Designing end-to-end data architecture aligned with modern best practices. Building and managing ingestion pipelines using Databricks and related tools. Developing PySpark/Spark SQL notebooks for complex transformations and cleansing. Applying governance, security, and CI/CD best practices across cloud environments. Leading technical discussions … producing professional documentation. To be successful in this role, you will have: Hands-on experience with Databricks including Unity Catalog. Strong PySpark/Spark SQL skills for large-scale transformations. Experience integrating with diverse data sources such as APIs, cloud storage, and databases. Experience with the Azure cloud data ...
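
The "diverse data sources" requirement above could look like the following sketch, which lands cloud storage, JDBC, and REST API data side by side. All URLs, credentials, and table names are hypothetical.

```python
# Integrating cloud storage, a JDBC database, and a REST API; all names hypothetical.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("source-integration").getOrCreate()

# 1. Cloud storage: Parquet files in an Azure Data Lake container.
storage_df = spark.read.parquet(
    "abfss://raw@examplelake.dfs.core.windows.net/sales/"
)

# 2. Database: a table pulled over JDBC (driver must be on the classpath).
db_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://db.example.com;databaseName=erp")
    .option("dbtable", "dbo.products")
    .option("user", "reader")
    .option("password", "***")
    .load()
)

# 3. API: a small JSON payload parallelised into a DataFrame.
payload = requests.get("https://api.example.com/v1/regions", timeout=30).json()
api_df = spark.createDataFrame(payload)  # expects a list of dicts

for name, df in [("sales", storage_df), ("products", db_df), ("regions", api_df)]:
    df.write.format("delta").mode("overwrite").saveAsTable(f"bronze.{name}")
```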