Data Engineer
- Hiring Organisation: Stable
- Location: Reading, England, United Kingdom
Requirements:
- … maintaining data-streaming pipelines (e.g., Kafka, Spark Streaming, or equivalent).
- Proficiency with Databricks and Apache Spark for large-scale data processing.
- Experience with Hadoop and/or wider Apache ecosystem tools (e.g., Kafka, NiFi, Airflow).
- Strong understanding of Product Data Management (PDM), PLM, and/or engineering … regulated, and multi-national environments.

Key Responsibilities:
- Design, build, and optimise data ingestion, transformation, and integration pipelines.
- Implement scalable processing frameworks using Databricks, Spark, Hadoop, and Apache technologies.
- Develop and support real-time and near-real-time data-streaming pipelines for engineering and operational data.
- Integrate engineering datasets across …