end-to-end solutions for an array of customer data projects. The role: designing data pipelines, managing data warehouses, and implementing complex ETL processes. Work closely with data scientists, analysts, and stakeholders to optimise our client’s infrastructure. Optimise data-driven products, personalisation, reporting and overall business …
and implementing data models and algorithms to support data science and machine learning initiatives. Technical Requirements: proven track record leading complex ETL and Data Infrastructure projects, as well as designing and building data-intensive applications and services. Experience with data processing and distributed computing frameworks such …
to also add GCP and on-prem. They are adding extensive usage of distributed compute on Spark, starting with their more complex ETL and advanced analytics functions, e.g. Time Series Processing. They soon plan to integrate other approaches, including native distributed PyTorch/TensorFlow, Spark-based distributor … stacks. Shape the ML engineering culture and practices around model and data versioning, scalability, model benchmarking, and ML-specific branching and release strategy. Concisely break down complex high-level ML requirements into smaller deliverables (epic → task). Proactively identify blockers across the team and help unblock them promptly and effectively. Bring …
City of London, London, United Kingdom Hybrid / WFH Options
Sanderson Recruitment
to take your career to the next level. Key skills/Experience:
- Strong visualisation experience using Power BI
- Comfortable setting up complex ETL processes
- Strong experience around SSIS and Azure Data Factory
- Azure Synapse, Azure Data Factory and Data Lakes
- DevOps and deployments knowledge and exposure
- Solid communication …