with multiple data analytics tools (e.g. Power BI). Deep understanding of data warehousing concepts, ETL/ELT pipelines and dimensional modelling. Proficiency in advanced programming languages (Python/PySpark, SQL). Experience in data pipeline orchestration (e.g. Airflow, Data Factory). Familiarity with DevOps and CI/CD practices (Git, Azure DevOps etc.). Ability to communicate technical concepts to both …
Newbury, Berkshire, England, United Kingdom Hybrid / WFH Options
Intuita
including Azure DevOps or GitHub. Considerable experience designing and building operationally efficient pipelines, utilising core Cloud components such as Azure Data Factory, BigQuery, Airflow, Google Cloud Composer and PySpark. Proven experience in modelling data through a medallion-based architecture, with curated dimensional models in the gold layer built for analytical use. Strong understanding and/or use of …
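To illustrate the medallion pattern this listing describes, a minimal PySpark sketch: raw data lands in a bronze layer, is cleansed into silver, and is aggregated into an analytical gold table. All paths, schemas and column names are illustrative assumptions, not part of the listing.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: land raw source data as-is (hypothetical path and schema).
bronze = spark.read.json("/lake/bronze/orders/")

# Silver: cleanse and conform - drop rows missing keys, standardise types.
silver = (
    bronze
    .dropna(subset=["order_id", "order_ts"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .dropDuplicates(["order_id"])
)
silver.write.mode("overwrite").parquet("/lake/silver/orders/")

# Gold: a curated aggregate built for analytical use.
gold = (
    silver
    .groupBy(F.to_date("order_ts").alias("order_date"), "customer_id")
    .agg(F.sum("amount").alias("daily_spend"), F.count("*").alias("order_count"))
)
gold.write.mode("overwrite").parquet("/lake/gold/daily_customer_spend/")
```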
Description: Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3 years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and …
City of London, London, United Kingdom Hybrid / WFH Options
Client Server
are an experienced Data Engineer within financial services environments. You have expertise with GCP including BigQuery, Pub/Sub, Cloud Composer and IAM. You have strong Python, SQL and PySpark skills. You have experience with real-time data streaming using Kafka or Spark. You have a good knowledge of Data Lakes, Data Warehousing and Data Modelling. You're familiar with …
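For context on the GCP stack named above, a small illustrative snippet using the official google-cloud-bigquery client; the query and dataset are just an example against a well-known public table, not anything from the listing.

```python
from google.cloud import bigquery

# Assumes application-default credentials are configured (e.g. via gcloud auth).
client = bigquery.Client()

# Example query against a BigQuery public dataset.
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query).result():
    print(row["name"], row["total"])
```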
Warrington, Cheshire, North West England, United Kingdom
Scrumconnect Consulting
Experience: Active SC Clearance (mandatory at application stage). Proven expertise in: Azure Data Factory & Azure Synapse; Azure DevOps & the Microsoft Azure ecosystem; Power BI (including semantic models); Python (incl. PySpark) and advanced SQL; dbt with SQL DBs (data transformation & modelling); dimensional data modelling; Terraform for infrastructure-as-code deployments. Strong experience with both structured and unstructured data. Delivery track record …
London, South East, England, United Kingdom Hybrid / WFH Options
Method Resourcing
collaboration, learning, and innovation. What we're looking for: Hands-on experience with the Azure Data Engineering stack (ADF, Databricks, Synapse, Data Lake). Strong skills in SQL and Python (PySpark experience is a bonus). Experience building and optimising ETL/ELT pipelines. A background in Financial Services is a plus, but not essential. A curious mindset and the ability …
in a business cultivating a robust in-house team, this role could be the next step for you. Data Engineer Responsibilities: Design, build, and optimise data pipelines in Python, PySpark, SparkSQL, and Databricks. Ingest, transform, and enrich structured, semi-structured, and unstructured data. Operate and support production-grade data systems with strong observability and monitoring. Enable real-time and …
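A hedged sketch of the kind of pipeline this role describes: PySpark ingestion, a Spark SQL transform, and a simple observability gate. All paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-demo").getOrCreate()

# Ingest semi-structured JSON (placeholder path).
events = spark.read.json("/data/raw/events/")
events.createOrReplaceTempView("events")

# Enrich via Spark SQL, since the role pairs SparkSQL with the DataFrame API.
enriched = spark.sql("""
    SELECT user_id,
           event_type,
           CAST(event_ts AS TIMESTAMP) AS event_ts
    FROM events
    WHERE event_type IS NOT NULL
""")

# A minimal observability check: fail loudly rather than load bad data.
null_users = enriched.filter(F.col("user_id").isNull()).count()
if null_users > 0:
    raise ValueError(f"{null_users} rows missing user_id - aborting load")

enriched.write.mode("append").partitionBy("event_type").parquet("/data/curated/events/")
```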
London (City of London), South East England, United Kingdom
Miryco Consultants Ltd
and machine learning use cases. Support the migration of legacy reporting tools into Databricks and modern BI solutions. Key Skills & Experience (Essential): Strong hands-on experience with Databricks (SQL, PySpark, Delta Lake). Solid knowledge of BI and data visualisation tools (e.g., Power BI, Tableau, Qlik). Strong SQL and data modelling skills. Experience working with large, complex financial …
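As an illustration of the Delta Lake skill set listed above, a minimal PySpark upsert using the delta-spark package. The table path, staging source and join key are assumptions for the example, and the target Delta table is assumed to already exist.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = (
    SparkSession.builder.appName("delta-upsert")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Hypothetical staging data to merge into the gold table.
updates = spark.read.parquet("/staging/positions/")

target = DeltaTable.forPath(spark, "/lake/gold/positions")

# Upsert: update matched rows, insert new ones - the standard Delta merge pattern.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.position_id = s.position_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```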
Atherstone, Warwickshire, West Midlands, United Kingdom Hybrid / WFH Options
Aldi Stores
end-to-end ownership of demand delivery. Provide technical guidance for team members. Provide 2nd or 3rd level technical support. About You: Experience using SQL, SQL Server DB, Python & PySpark. Experience using Azure Data Factory. Experience using Databricks and Cloudsmith. Data Warehousing experience. Project Management experience. The ability to interact with the operational business and other departments, translating …
City of London, London, United Kingdom Hybrid / WFH Options
Hexegic
and validate data models and outputs. Set up monitoring and ensure data health for outputs. What we are looking for: Proficiency in Python, with experience in Apache Spark and PySpark. Previous experience with data analytics software. Ability to scope new integrations and translate user requirements into technical specifications. What's in it for you? Base salary of …
City of London, London, England, United Kingdom Hybrid / WFH Options
Ada Meher
London offices 1-2 days a week, based on business need. To Be Considered: Demonstrable expertise and experience working on large-scale Data Engineering projects. Strong experience in Python/PySpark, Databricks & Apache Spark. Hands-on experience with both batch & streaming pipelines. Strong experience in AWS and associated tooling (e.g. S3, Glue, Redshift, Lambda, Terraform etc.). Experience designing Data Engineering …
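Since the listing asks for both batch and streaming pipelines, a small Spark Structured Streaming sketch reading from Kafka with a windowed aggregation. Broker, topic and window sizes are placeholders, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# Read a Kafka topic as a stream (placeholder broker and topic).
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "trades")
    .load()
)

# Kafka delivers bytes; decode the payload, then window-aggregate with a watermark.
counts = (
    stream.select(F.col("value").cast("string").alias("payload"),
                  F.col("timestamp"))
    .withWatermark("timestamp", "10 minutes")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

# Write micro-batches out; in production the sink might be S3 Parquet instead.
query = counts.writeStream.outputMode("append").format("console").start()
query.awaitTermination()
```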
Enforce GDPR-compliant governance and security. Optimize performance and cost of data workflows. Collaborate with teams to deliver clean, structured data. Key Skills Required: Azure data services, Python/PySpark/SQL, data modelling; Power BI (preferred), legal system familiarity (bonus); strong grasp of UK data regulations. Certifications: Microsoft certifications (e.g., Azure Data Engineer Associate, Azure Solutions Architect) desirable …
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
DXC Technology
and addressing data science opportunities. Required Skills & Experience: Proven experience in MLOps or DevOps roles within machine learning environments. Strong programming skills in Python, with hands-on experience in PySpark and SQL. Deep understanding of ML lifecycle management and CI/CD best practices. Familiarity with cloud-native ML platforms and scalable deployment strategies. Excellent problem-solving skills and …
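The listing names ML lifecycle management without naming a tool; MLflow is used below purely as a representative example of experiment tracking, with a toy scikit-learn model standing in for real training code.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy dataset and model, standing in for a real training job.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Track params, metrics and the model artifact so runs are reproducible.
with mlflow.start_run(run_name="baseline"):
    model = LogisticRegression(max_iter=200).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```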
Essential Skills Include: Proven leadership and mentoring experience in senior data engineering roles. Expertise in Azure Data Factory, Azure Databricks, and lakehouse architecture. Strong programming skills (Python, T-SQL, PySpark) and test-driven development. Deep understanding of data security, compliance, and tools like Microsoft Purview. Excellent communication and stakeholder management skills. Experience with containerisation and orchestration (e.g., Kubernetes, Azure …
and development plan beyond generic certifications. Provide a Rough Order of Magnitude (ROM) cost for implementing the proposed roadmap. Essential: Deep expertise in the Databricks Lakehouse Platform, including Python, PySpark, and advanced SQL. Strong practical knowledge of Microsoft Fabric. Proven experience in senior, client-facing roles with a consultancy mindset. Background in technical coaching, mentorship, or skills assessment. Excellent …