Java exposure beneficial. Delta Lake/Delta table optimisation experience. Git/GitLab, CI/CD pipelines, DevOps practices. Strong troubleshooting and problem-solving ability. Experience with lakehouse architectures, ETL workflows, and distributed computing. Familiarity with time-series, market data, transactional data or risk metrics. Nice to Have: Power BI dataset preparation; OneLake, Azure Data Lake, Kubernetes, Docker; Knowledge of …
broad technical skills and ability to work with large amounts of data. You will collaborate with the Game and Product teams to implement data strategies and develop complex ETL pipelines that support dashboards for promoting deeper understanding of our games. You will have experience developing and establishing scalable, efficient, automated processes for large-scale data analyses. You will also … and design solutions to support product analytics, business analytics and advanced data science ● Design efficient and scalable data pipelines using cloud-native and open-source technologies ● Develop and improve ETL/ELT processes to ingest data from diverse sources ● Work with analysts to understand requirements and develop technical specifications for ETLs, including documentation ● Support production code to produce … junior engineers, and contribute to team knowledge sharing ● Document data processes, architecture, and workflows for transparency and maintainability ● Work with big data solutions, data modelling, ETL pipelines and dashboard tools Required Qualifications: ● 4+ years relevant industry experience in a data engineering role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field
JOIN, WHERE, basic troubleshooting). Knowledge of ERP systems and business processes such as inventory, BOM, and work orders. Exposure to data migration or integration tasks (CSV/ODBC, ETL). Excellent communication and problem-solving skills. Familiar with Microsoft Office and ideally Power BI, Power Automate, or scripting (PowerShell). Understanding of data protection, access control, and change management
/desirable/advantageous] • Ability to create and maintain effective working relationships with stakeholders [at all levels] • Middle layer/API/backend system integration • Basic database and ETL knowledge • Experience in AWS/Azure cloud and on-premises application hosting, at least one project • Experience in web app and infrastructure security • Non-functional requirements • Front end exposure is …
Atherstone, Warwickshire, England, United Kingdom Hybrid/Remote Options
Big Red Recruitment
Data Platform & Engineering team, you’ll join as the expert within Azure Databricks. You'll help to upskill our team's knowledge of Databricks and get involved in data modelling and ETL pipeline development, integrations, and performance tuning. This is an ever-evolving project as the business becomes more and more data-driven, and your knowledge of Databricks will be pivotal in … highly visible and strategic data platform project. What you’ll be doing: Leading the design and implementation of a new Databricks-based data warehousing solution Designing and developing data models, ETL pipelines, and data integration processes Large-scale data processing using PySpark Monitoring, tuning, and optimising data platforms for reliability and performance Upskilling the wider team in Databricks best practices, including