10+ years' experience in Data Engineering, with a minimum of 3 years of hands-on Azure Databricks experience delivering production-grade solutions. Strong programming proficiency in Python and Spark (PySpark) or Scala, with the ability to build scalable and efficient data processing applications. Advanced understanding of data warehousing concepts, including dimensional modelling, ETL/ELT patterns, and modern data …
Greater Manchester, North West, United Kingdom Hybrid/Remote Options
Searchability (UK) Ltd
Enhanced Maternity & Paternity, Charity Volunteer Days, Cycle to Work scheme and more. DATA ENGINEER - ESSENTIAL SKILLS Proven experience building data pipelines using Databricks. Strong understanding of Apache Spark (PySpark or Scala) and Structured Streaming. Experience working with Kafka (MSK) and handling real-time data. Good knowledge of Delta Lake/Delta Live Tables and the Medallion …
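By way of illustration only (not part of the posting): a minimal PySpark Structured Streaming sketch of the kind of Kafka-to-Delta ingestion these requirements describe. The broker endpoint, topic name, event schema and storage paths are placeholder assumptions, and a Databricks-style runtime with the Kafka and Delta connectors is assumed.

```python
# Illustrative sketch: stream JSON events from a Kafka (MSK) topic into a bronze Delta table.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-to-bronze").getOrCreate()

# Placeholder event schema for the incoming JSON payload.
event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("updated_at", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # placeholder MSK endpoint
    .option("subscribe", "orders")                        # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Parse the Kafka value bytes as JSON and flatten into columns.
bronze = raw.select(from_json(col("value").cast("string"), event_schema).alias("e")).select("e.*")

query = (
    bronze.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/bronze/_checkpoints/orders")  # placeholder path
    .outputMode("append")
    .start("/mnt/bronze/orders")                                      # placeholder path
)
```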
design and optimise pipelines for large-scale data processing. Hands-on experience with Databricks and Azure. Strong stakeholder communication and problem-solving skills. Tech Stack Required: Databricks, DBT, Python, PySpark, SQL, Azure. Bonus: Experience in eCommerce environments. If you are interested please apply below.
you will have: Proven leadership experience within data engineering or data consultancy. Hands-on experience with Microsoft Fabric, Azure Synapse, Data Factory and Databricks. Strong SQL and Python/PySpark skills. Experience designing data lakes, warehouses and scalable data models for BI tools like Power BI. Solid understanding of data architecture and optimisation techniques for large-scale solutions. Some …
Stockport, Greater Manchester, UK Hybrid/Remote Options
Tenth Revolution Group
you will have: Proven leadership experience within data engineering or data consultancy. Hands-on experience with Microsoft Fabric, Azure Synapse, Data Factory and Databricks. Strong SQL and Python/PySpark skills. Experience designing data lakes, warehouses and scalable data models for BI tools like Power BI. Solid understanding of data architecture and optimisation techniques for large-scale solutions. Some …
Bolton, Greater Manchester, UK Hybrid/Remote Options
Tenth Revolution Group
you will have: Proven leadership experience within data engineering or data consultancy. Hands-on experience with Microsoft Fabric, Azure Synapse, Data Factory and Databricks. Strong SQL and Python/PySpark skills. Experience designing data lakes, warehouses and scalable data models for BI tools like Power BI. Solid understanding of data architecture and optimisation techniques for large-scale solutions. Some …
Manchester, Lancashire, England, United Kingdom Hybrid/Remote Options
Lorien
a blend of the following: Strong knowledge of AWS data services (Glue, S3, Lambda, Redshift, etc.). Solid understanding of ETL processes and data pipeline management. Proficiency in Python and PySpark. Experience working with SQL-based platforms. Previous involvement in migrating on-premises solutions to the cloud is highly desirable. Excellent collaboration skills and ability to mentor others. The Benefits: Salary …
on-premises databases and cloud). Maintain/implement data warehousing solutions and manage large-scale data storage systems (e.g. Microsoft Fabric). Build and optimise SQL queries, stored procedures, PySpark notebooks and database objects to ensure data performance and reliability. Migrate and modernise legacy databases to cloud-based architectures. Database Administration: administer, monitor, and optimise database systems (e.g. SQL … level SQL and database design (normalisation, indexing, query optimisation). Strong experience with ETL/ELT tools, e.g. Azure Data Factory, Databricks, Synapse Pipelines, SSIS, etc. Experience with Python, PySpark, or Scala for data processing. Familiarity with CI/CD practices. Experience with data lake, data warehouse and Medallion architectures. Understanding of API integrations and streaming technologies (event hub …
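As an illustrative sketch only (not from the advert): a minimal bronze-to-silver refinement step of the kind used in a Medallion architecture, implemented as a PySpark notebook over Delta tables. The table paths, the business key and the column names are placeholder assumptions.

```python
# Illustrative sketch: refine a raw bronze Delta table into a de-duplicated silver table.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, row_number
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Read the raw landing data (placeholder path).
bronze = spark.read.format("delta").load("/mnt/bronze/customers")

# Keep only the latest record per business key, based on ingestion time.
latest = Window.partitionBy("customer_id").orderBy(col("ingested_at").desc())

silver = (
    bronze.withColumn("rn", row_number().over(latest))
    .filter(col("rn") == 1)
    .drop("rn")
    .filter(col("customer_id").isNotNull())  # basic data-quality rule
)

# Write the cleansed silver table (placeholder path).
silver.write.format("delta").mode("overwrite").save("/mnt/silver/customers")
```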