Newbury, Berkshire, England, United Kingdom Hybrid / WFH Options
Intuita
…including Azure DevOps or GitHub
- Considerable experience designing and building operationally efficient pipelines, utilising core cloud components such as Azure Data Factory, BigQuery, Airflow, Google Cloud Composer and PySpark
- Proven experience in modelling data through a medallion-based architecture, with curated dimensional models in the gold layer built for analytical use (see the PySpark sketch after this listing)
- Strong understanding and/or use of …
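For context on the medallion pattern this advert references, here is a minimal PySpark sketch of bronze → silver → gold layering. The paths, columns, and table names are hypothetical illustrations, not taken from the role:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: raw source data landed as-is (hypothetical path).
bronze = spark.read.json("/lake/bronze/orders/")

# Silver: cleanse and conform: typed columns, deduplicated, keys enforced.
silver = (
    bronze
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])
    .where(F.col("order_id").isNotNull())
)

# Gold: a curated dimensional aggregate built for analytical use.
gold = (
    silver
    .groupBy(F.to_date("order_ts").alias("order_date"), "customer_id")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_count"))
)

gold.write.format("delta").mode("overwrite").saveAsTable("gold.fct_daily_orders")
```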
Slough, South East England, United Kingdom Hybrid / WFH Options
Client Server
…You are an experienced Data Engineer within financial services environments
- You have expertise with GCP, including BigQuery, Pub/Sub, Cloud Composer and IAM (see the sketch after this listing)
- You have strong Python, SQL and PySpark skills
- You have experience with real-time data streaming using Kafka or Spark
- You have a good knowledge of Data Lakes, Data Warehousing and Data Modelling
- You're familiar with …
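As a small illustration of the GCP stack this role lists, the sketch below queries BigQuery from Python. It assumes application-default credentials, and the project, dataset, and table names are hypothetical:

```python
from google.cloud import bigquery

# Hypothetical project; relies on application-default credentials.
client = bigquery.Client(project="my-analytics-project")

query = """
    SELECT customer_id, SUM(amount) AS total_amount
    FROM `my-analytics-project.sales.orders`
    GROUP BY customer_id
    ORDER BY total_amount DESC
    LIMIT 10
"""

# Run the query and iterate over the result rows.
for row in client.query(query).result():
    print(row.customer_id, row.total_amount)
```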
…leadership and upskilling responsibilities.
Key Responsibilities
- Build and maintain Databricks Delta Live Tables (DLT) pipelines across Bronze → Silver → Gold layers, ensuring quality, scalability, and reliability.
- Develop and optimise Spark (PySpark) jobs for large-scale distributed processing.
- Design and implement streaming data pipelines with Kafka/MSK, applying best practices for late event handling and throughput (see the sketch after this list).
- Use Terraform and CI…
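To make the DLT and late-event-handling responsibilities concrete, here is a minimal Delta Live Tables sketch. It only runs inside a Databricks DLT pipeline; the broker address, topic, and columns are hypothetical assumptions, and the watermark is one common way to bound late events in windowed aggregates:

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw events from Kafka (hypothetical broker/topic).")
def events_bronze():
    return (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
        .option("subscribe", "events")
        .load()
    )

@dlt.table(comment="Silver: typed events with a basic quality expectation.")
@dlt.expect_or_drop("valid_key", "event_key IS NOT NULL")
def events_silver():
    return (
        dlt.read_stream("events_bronze")
        .select(
            F.col("key").cast("string").alias("event_key"),
            F.col("value").cast("string").alias("body"),
            F.col("timestamp").alias("event_time"),
        )
    )

@dlt.table(comment="Gold: per-minute event counts for analytics.")
def events_gold():
    return (
        dlt.read_stream("events_silver")
        # Events arriving more than 10 minutes late fall outside the
        # watermark and are excluded from the windowed aggregate.
        .withWatermark("event_time", "10 minutes")
        .groupBy(F.window("event_time", "1 minute"))
        .agg(F.count("*").alias("event_count"))
    )
```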
…CI/CD adoption across teams.
- Act as a trusted advisor, simplifying technical concepts and communicating clearly with business stakeholders.
- Develop and maintain data pipelines using Azure ADF, Databricks, PySpark, and Delta Lake (a minimal upsert sketch follows this list).
- Build and optimise workflows in Python and SQL to support supply chain, sales, and marketing analytics.
- Contribute to CI/CD pipelines using GitHub Actions (or …
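As one hedged example of the Databricks/Delta Lake pipeline work described above, the sketch below performs an incremental upsert into a Delta table using the delta-spark API; the staging path, table path, and join key are hypothetical:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incoming batch, e.g. landed by an ADF copy activity (hypothetical path).
updates = spark.read.parquet("/staging/sales_updates/")

# Upsert into the curated Delta table, keyed on order_id.
target = DeltaTable.forPath(spark, "/lake/silver/sales")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```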
Slough, South East England, United Kingdom Hybrid / WFH Options
Peaple Talent
…having delivered in Microsoft Azure
- Strong experience designing and delivering data solutions in Databricks
- Proficient with SQL and Python
- Experience using Big Data technologies such as Apache Spark or PySpark
- Great communication skills, engaging effectively with senior stakeholders
Nice to haves:
- Azure Data Engineering certifications
- Databricks certifications
What's in it for you:
📍 Location: London (Hybrid)
💻 Remote working: Occasional …
…and development plan beyond generic certifications.
- Provide a Rough Order of Magnitude (ROM) cost for implementing the proposed roadmap.
Essential
- Deep expertise in the Databricks Lakehouse Platform, including Python, PySpark, and advanced SQL.
- Strong practical knowledge of Microsoft Fabric.
- Proven experience in senior, client-facing roles with a consultancy mindset.
- Background in technical coaching, mentorship, or skills assessment.
- Excellent …
…and machine learning use cases.
- Support the migration of legacy reporting tools into Databricks and modern BI solutions.
Key Skills & Experience
Essential:
- Strong hands-on experience with Databricks (SQL, PySpark, Delta Lake).
- Solid knowledge of BI and data visualisation tools (e.g., Power BI, Tableau, Qlik).
- Strong SQL and data modelling skills.
- Experience working with large, complex financial …
Slough, South East England, United Kingdom Hybrid / WFH Options
83zero
…controls.
AI & Technology Enablement
- Build tools and processes for metadata management, data quality, and data sharing.
- Leverage AI and automation tools to improve data governance capabilities.
- Use Python, SQL, PySpark, Power BI, and related tools for data processing and visualisation.
Strategy & Stakeholder Engagement
- Provide subject matter expertise in data governance and AI governance.
- Collaborate with business, data, and tech …
…estate and venture capital domain.
• Lead the design and implementation of robust data architectures to support business needs and data strategy.
• Utilize extensive experience in Azure Synapse, Python, PySpark, and ADF to architect scalable and efficient data solutions.
• Oversee and optimize SSIS, SSRS, and SQL Server environments, ensuring high performance and reliability.
• Write/review complex SQL queries …
Reading, England, United Kingdom Hybrid / WFH Options
TP Embedded Solutions Ltd
…technical architects and uplifting data capability across teams.
What You’ll Bring
- Proven experience as a Data Architect, Lead Data Engineer, or similar.
- Hands-on expertise with SQL, Python, PySpark, and Azure data technologies.
- Strong understanding of data governance frameworks and compliance.
- Experience designing enterprise-scale data platforms.
- Excellent communication and mentoring skills.
What You’ll Get
- Hybrid working …
…platform.
- Optimise data pipelines for performance, efficiency, and cost-effectiveness.
- Implement data quality checks and validation rules within data pipelines (a minimal sketch follows this list).
Data Transformation & Processing:
- Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies.
- Develop and maintain data processing logic for cleaning, enriching, and aggregating data.
- Ensure data consistency and accuracy throughout the data lifecycle.
- Azure Databricks … practices.
Essential Skills & Experience:
- 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks.
- Strong proficiency in Python and Spark (PySpark) or Scala.
- Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns.
- Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and …
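A minimal sketch of the in-pipeline data quality checks this advert describes; the rules, threshold, and paths are illustrative assumptions rather than anything specified in the role:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical silver-layer input.
df = spark.read.format("delta").load("/lake/silver/transactions")

# Validation rules: primary key present, amounts non-negative.
failed = df.where(F.col("txn_id").isNull() | (F.col("amount") < 0))

total, bad = df.count(), failed.count()
failure_rate = bad / total if total else 0.0

# Abort the run if more than 1% of rows violate the rules;
# otherwise write only the valid rows onward.
if failure_rate > 0.01:
    raise ValueError(f"Data quality check failed: {bad}/{total} rows invalid")

df.subtract(failed).write.format("delta").mode("overwrite").save("/lake/gold/transactions")
```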