Permanent Lead Data Engineer Jobs in the Thames Valley


Lead Data Engineer

Reading, Berkshire, South East, United Kingdom
Hybrid / WFH Options
Bowerford Associates
We are searching for an experienced Lead Data Engineer for an exciting data-driven business with multiple office locations across the UK. The role is offered as a remote position, but you MUST be based in the UK to be considered for the opportunity. In this role you will be involved in determining the future direction … and impact of data engineering within the business.

This is an extremely exciting Lead Data Engineering role and a fantastic opportunity for an experienced, innovative and hands-on data professional to help shape our client's products and solutions. Reporting directly to the Head of Data Engineering, you will play a crucial role in driving the … team's vision and objectives to completion. You will be expected to provide technical leadership, own the solution, ensure the reliability of data products, and collaborate closely with your team and other teams to optimise data solutions. This is a fantastic opportunity for a highly skilled and motivated Lead Data Engineer with strong expertise in data …
Employment Type: Permanent, Work From Home
Salary: £80,000
Posted:

AWS Lead Data Engineer

Slough, South East England, United Kingdom
HCLTech
Retail and CPG, and Public Services. Consolidated revenues for the 12 months ending December 2024 totalled $13.8 billion.

Job Summary: We are seeking a highly skilled and experienced AWS Lead Data Engineer who will build and lead the development of scalable data pipelines and platforms on AWS. The ideal candidate will have deep expertise in … PySpark, Glue, Athena, AWS Lake Formation, data modelling, DBT, Airflow and Docker, and will be responsible for driving best practices in data engineering, governance, and DevOps.

Key Responsibilities:
• Lead the design and implementation of scalable, secure, and high-performance data pipelines using PySpark and AWS Glue.
• Architect and manage data lakes using AWS Lake Formation, ensuring proper … access control and data governance.
• Develop and optimise data models (dimensional and normalised) to support analytics and reporting.
• Collaborate with analysts and business stakeholders to understand data requirements and deliver robust solutions.
• Implement and maintain CI/CD pipelines for data workflows using tools like AWS CodePipeline, Git, and GitHub Actions.
• Ensure data quality, lineage, and …
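One of the responsibilities above is ensuring data quality in pipelines. As a rough illustration of what such a check involves, here is a minimal, library-free Python sketch of a batch-level quality gate of the kind a PySpark or Glue job might run before publishing a table; all column names and rules here are hypothetical, not taken from the listing.

```python
# Hypothetical data-quality gate: flag missing required columns and
# duplicate primary keys in a batch of rows before it is published.
def check_quality(rows, required=("order_id", "amount"), unique_key="order_id"):
    """Return a list of human-readable violations for a batch of dict rows."""
    violations = []
    seen = set()
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) is None:
                violations.append(f"row {i}: missing {col}")
        key = row.get(unique_key)
        if key in seen:
            violations.append(f"row {i}: duplicate {unique_key}={key}")
        seen.add(key)
    return violations

batch = [
    {"order_id": 1, "amount": 9.99},
    {"order_id": 1, "amount": 4.50},   # duplicate key
    {"order_id": 2, "amount": None},   # missing amount
]
print(check_quality(batch))
```

In a production pipeline the same idea would typically be expressed with a dedicated framework (e.g. DBT tests or Glue Data Quality rules) rather than hand-rolled loops, but the core contract — validate, then publish or fail — is the same.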
Posted: