Data Engineer (Big Data / Hadoop / Spark Dev)
Your new company
Working for a renowned financial services organisation.

Your new role
We're looking for a Data Engineer to design and deliver scalable, high-quality on-prem data solutions, covering both low- and high-level design of the data platforms that power analytical and business insights. This is a hands-on role suited to someone with strong data engineering and big data expertise, ideally gained within financial services. Using Big Data tools and Spark development, you will ensure data quality through automated validation, monitoring, and testing. You will also enable seamless integration across data warehouses and data lakes, contributing to a robust, scalable, and resilient enterprise data ecosystem.

What you'll need to succeed
- Strong Data Engineering expertise with Big Data technologies.
- Experience designing and building on-prem data platforms, from high-level architecture to detailed technical design.
- Strong Spark development experience.
- Hands-on experience with Hadoop - configuring multi-node Hadoop clusters, including resource management, security, and performance tuning.
- Strong Big Data engineering background using Apache Airflow, Spark, dbt, Kafka, and Hadoop ecosystem tools.
- Knowledge of RDBMS systems (PostgreSQL, SQL Server) and familiarity with NoSQL/distributed databases such as MongoDB.
- Proven delivery of streaming pipelines and real-time data processing solutions.
What you'll get in return
Flexible working options available.

What you need to do now
If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now.
Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and as an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers, which can be found at hays.co.uk