Data Engineer (Contract)
9+ Month Contract based in Sheffield
£395 - £442 per day Inside IR35
BPSS clearance required - candidates must be eligible
My client is seeking a Data Engineer to design and operate large-scale telemetry and observability data pipelines within a modern OpenShift and Kafka ecosystem. This role is central to enabling proactive, Level 4 observability, delivering high-quality metrics, logs, and traces to support platform reliability, operational insight, and automation.
Responsibilities:
* Design, implement and maintain scalable data pipelines to ingest and process OpenShift telemetry (metrics, logs, traces)
* Stream telemetry through Kafka (producers, topics, schemas) and build resilient consumer services for enrichment and transformation (see the first sketch after this list)
* Engineer multi-tenant observability data models, ensuring data lineage, quality controls and SLAs across streaming layers
* Integrate processed telemetry into Splunk for dashboards, analytics, alerting and operational insights
* Implement schema management and governance using Avro/Protobuf, including versioning and compatibility strategies
* Build automated validation, replay and backfill mechanisms to ensure data reliability and recovery
* Instrument services using OpenTelemetry, standardising tracing, metrics and structured logging (see the second sketch after this list)
* Apply LLMs to enhance observability, for example query assistance, anomaly summarisation and runbook generation
* Collaborate with Platform, SRE and Application teams to align telemetry, alerts and SLOs
* Ensure pipelines meet security, compliance and best-practice standards
* Produce clear documentation covering data flows, schemas, dashboards and operational runbooks
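
To give a flavour of the streaming work above, here is a minimal sketch of a Kafka enrichment consumer in Python. It assumes the confluent-kafka client, and the broker address, topic names and tenant/cluster labels (kafka:9092, raw.otel.logs, enriched.otel.logs, openshift-prod) are illustrative placeholders, not details of the client's estate.

    import json

    from confluent_kafka import Consumer, Producer

    consumer = Consumer({
        "bootstrap.servers": "kafka:9092",   # placeholder broker address
        "group.id": "telemetry-enricher",
        "auto.offset.reset": "earliest",
        "enable.auto.commit": False,         # commit only after a successful produce
    })
    producer = Producer({"bootstrap.servers": "kafka:9092"})

    consumer.subscribe(["raw.otel.logs"])

    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None:
                continue
            if msg.error():
                print(f"consumer error: {msg.error()}")
                continue

            record = json.loads(msg.value())
            # Enrichment step: tag each event with its tenant and source cluster
            # so downstream dashboards and alerts can filter per tenant.
            record["tenant"] = record.get("k8s.namespace.name", "unknown")
            record["cluster"] = "openshift-prod"  # placeholder cluster label

            producer.produce("enriched.otel.logs", json.dumps(record).encode())
            producer.poll(0)                      # serve delivery callbacks
            consumer.commit(message=msg)          # at-least-once delivery
    finally:
        consumer.close()
        producer.flush()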
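For the OpenTelemetry instrumentation bullet, the sketch below shows the standard opentelemetry-sdk pattern in Python, with a console exporter standing in for whatever collector endpoint the platform actually uses; the tracer name and span attributes are illustrative only.

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    # Wire up a tracer provider; in production the console exporter would be
    # replaced by an OTLP exporter pointing at the collector.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("telemetry.pipeline")  # placeholder tracer name

    # Wrap a unit of pipeline work in a span with structured attributes.
    with tracer.start_as_current_span("enrich-batch") as span:
        span.set_attribute("tenant", "team-a")  # placeholder attribute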
Skills & Experience:
* Strong hands-on experience building streaming data pipelines with Kafka (producers/consumers, schema registry, Kafka Connect, KSQL/KStreams)
* Experience with OpenShift / Kubernetes telemetry, including OpenTelemetry and Prometheus
* Proven capability integrating telemetry into Splunk (HEC, Universal Forwarders, sourcetypes, CIM, dashboards, alerting) - see the sketch after this list
* Solid data engineering skills in Python (or similar) for ETL/ELT, enrichment and validation
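
For the Splunk skill, here is a minimal sketch of posting one enriched event to the Splunk HTTP Event Collector (HEC) from Python with the requests library; the URL, token, index and sourcetype values are placeholders rather than client details.

    import json

    import requests

    HEC_URL = "https://splunk.example.com:8088/services/collector/event"
    HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

    def send_to_splunk(event: dict, sourcetype: str = "otel:log") -> None:
        """Post a single event to Splunk HEC with an explicit sourcetype."""
        payload = {
            "event": event,
            "sourcetype": sourcetype,
            "index": "telemetry",  # placeholder index
        }
        resp = requests.post(
            HEC_URL,
            headers={"Authorization": f"Splunk {HEC_TOKEN}"},
            data=json.dumps(payload),
            timeout=5,
        )
        resp.raise_for_status()

    send_to_splunk({"message": "pod restarted", "tenant": "team-a"})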
Please apply for immediate interview!
CBSbutler is operating and advertising as an Employment Agency for permanent positions and as an Employment Business for interim / contract / temporary positions. CBSbutler is an Equal Opportunities employer and we encourage applicants from all backgrounds.