data services – Databricks, ADF, ADLS, Power BI. Proficiency in SQL and data profiling for test design and validation. Hands-on experience with test automation frameworks such as Python/PySpark, Great Expectations, Pytest, or dbt tests. Practical understanding of CI/CD integration (Azure DevOps, GitHub Actions, or similar). Strong problem-solving skills and the ability to work …
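The data-quality testing skills named above can be illustrated without any framework. This is a minimal plain-Python sketch of the kind of expectation checks that tools like Great Expectations or dbt tests formalise; the record shape and column names are hypothetical:

```python
# Illustrative sketch only: hand-rolled "expectations" over a list of dicts.
# Real projects would express these as Great Expectations suites or dbt tests.

def expect_no_nulls(rows, column):
    """True if no row has a null in the given column."""
    return all(row.get(column) is not None for row in rows)

def expect_values_between(rows, column, low, high):
    """True if every value lies in the inclusive range [low, high]."""
    return all(low <= row[column] <= high for row in rows)

# Hypothetical profiled sample
orders = [
    {"order_id": 1, "amount": 25.0},
    {"order_id": 2, "amount": 99.5},
]

assert expect_no_nulls(orders, "order_id")
assert expect_values_between(orders, "amount", 0, 1000)
```

In a CI/CD pipeline (Azure DevOps or GitHub Actions) checks like these would run via Pytest against each deployed dataset, failing the build on a data-quality regression.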
Newbury, Berkshire, England, United Kingdom Hybrid / WFH Options
Intuita
including Azure DevOps or GitHub. Considerable experience designing and building operationally efficient pipelines, utilising core Cloud components such as Azure Data Factory, BigQuery, Airflow, Google Cloud Composer and PySpark. Proven experience in modelling data through a medallion-based architecture, with curated dimensional models in the gold layer built for analytical use. Strong understanding and/or use of …
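The medallion layering mentioned above can be sketched framework-free: bronze holds raw records as ingested, silver cleans and deduplicates, gold aggregates into an analytics-ready shape. Field names and values here are hypothetical; in practice each layer would typically be a Delta table written via PySpark rather than plain Python lists:

```python
# Bronze: raw ingested events, duplicates and bad rows included.
bronze = [
    {"customer": "a", "amount": "10"},
    {"customer": "a", "amount": "10"},   # duplicate
    {"customer": "b", "amount": None},   # bad row
    {"customer": "b", "amount": "5"},
]

# Silver: typed, deduplicated, nulls dropped.
seen, silver = set(), []
for row in bronze:
    key = (row["customer"], row["amount"])
    if row["amount"] is not None and key not in seen:
        seen.add(key)
        silver.append({"customer": row["customer"], "amount": float(row["amount"])})

# Gold: curated aggregate built for analytical use.
gold = {}
for row in silver:
    gold[row["customer"]] = gold.get(row["customer"], 0.0) + row["amount"]

print(gold)  # {'a': 10.0, 'b': 5.0}
```

The point of the layering is that each stage is re-derivable from the one below it, so a bug fixed in the silver logic can be replayed over untouched bronze data.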
ad-hoc and complex queries to perform data analysis. Databricks experience is essential. Experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, PySpark, etc. You will be able to develop solutions in a hybrid data environment (on-Prem and Cloud). You must be able to collaborate seamlessly across diverse technical stacks …
Description: Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and …
Experience Active SC Clearance (mandatory at application stage). Proven expertise in: Azure Data Factory & Azure Synapse Azure DevOps & Microsoft Azure ecosystem Power BI (including semantic models) Python (incl. PySpark) and advanced SQL dbt with SQL DBs (data transformation & modelling) Dimensional data modelling Terraform for infrastructure-as-code deployments Strong experience with both structured and unstructured data. Delivery track …
Birmingham, West Midlands, England, United Kingdom
SF Recruitment
end (Data Factory, Synapse, Fabric, or Databricks). Strong SQL development and data modelling capability. Experience integrating ERP or legacy systems into cloud data platforms. Proficiency in Python or PySpark for transformation and automation. Understanding of data governance, access control, and security within Azure. Hands-on experience preparing data for Power BI or other analytics tools. Excellent communication skills …
DevOps best practices. Collaborate with BAs on source-to-target mapping and build new data model components. Participate in Agile ceremonies (stand-ups, backlog refinement, etc.). Essential Skills: PySpark and SparkSQL. Strong knowledge of relational database modelling Experience designing and implementing in Databricks (DBX notebooks, Delta Lakes). Azure platform experience. ADF or Synapse pipelines for orchestration. Python …
Key Skills: Strong SQL skills and experience with relational databases. Hands-on experience with Azure (ADF, Synapse, Data Lake) or AWS/GCP equivalents. Familiarity with scripting languages (Python, PySpark). Knowledge of data modelling and warehouse design (Kimball, Data Vault). Exposure to Power BI to support optimised data models for reporting. Agile team experience, CI/CD …
London, South East, England, United Kingdom Hybrid / WFH Options
Method Resourcing
collaboration, learning, and innovation What we're looking for Hands-on experience with the Azure Data Engineering stack (ADF, Databricks, Synapse, Data Lake) Strong skills in SQL and Python (PySpark experience is a bonus) Experience building and optimising ETL/ELT pipelines A background in Financial Services is a plus, but not essential A curious mindset and the ability …
Learning, Deep Learning or LLM Frameworks) Desirable Minimum 2 years' experience in a Data-related field Minimum 2 years in a Business or Management Consulting field Experience of Docker, Hadoop, PySpark, Apache or MS Azure Minimum 2 years' NHS/Healthcare experience Disclosure and Barring Service Check This post is subject to the Rehabilitation of Offenders Act (Exceptions Order …
in a business cultivating a robust in-house team, this role could be the next step for you. Data Engineer Responsibilities: Design, build, and optimise data pipelines in Python, PySpark, SparkSQL, and Databricks. Ingest, transform, and enrich structured, semi-structured, and unstructured data. Operate and support production-grade data systems with strong observability and monitoring. Enable real-time and …
and machine learning use cases. Support the migration of legacy reporting tools into Databricks and modern BI solutions. Key Skills & Experience Essential: Strong hands-on experience with Databricks (SQL, PySpark, Delta Lake). Solid knowledge of BI and data visualisation tools (e.g., Power BI, Tableau, Qlik). Strong SQL and data modelling skills. Experience working with large, complex financial …
is optimized. YOUR BACKGROUND AND EXPERIENCE 5 years of commercial experience working as a Data Engineer 3 years exposure to the Azure Stack - Databricks, Synapse, ADF Python and PySpark Airflow for Orchestration Test-Driven Development and Automated Testing ETL Development …
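The test-driven ETL development the ad above asks for can be sketched with a single transform: the assertion is written first and the function made to satisfy it. The transform and its field names are hypothetical, not taken from any real pipeline:

```python
# Hedged sketch of test-driven ETL: the expected output is pinned down by an
# assertion before (and alongside) the implementation.

def normalise_currency(record):
    """Trim and upper-case the currency code; round the amount to 2 dp."""
    return {
        "currency": record["currency"].strip().upper(),
        "amount": round(record["amount"], 2),
    }

# The test that drove the implementation:
assert normalise_currency({"currency": " gbp ", "amount": 10.239}) == {
    "currency": "GBP",
    "amount": 10.24,
}
```

In practice the same pattern scales to PySpark by asserting on small in-memory DataFrames under Pytest, so each transformation is exercised in CI before it touches production data.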
Atherstone, Warwickshire, West Midlands, United Kingdom Hybrid / WFH Options
Aldi Stores
end-to-end ownership of demand delivery Provide technical guidance for team members Providing 2nd or 3rd level technical support About You Experience using SQL, SQL Server DB, Python & PySpark Experience using Azure Data Factory Experience using Databricks and Cloudsmith Data Warehousing Experience Project Management Experience The ability to interact with the operational business and other departments, translating …
Winchester, Hampshire, England, United Kingdom Hybrid / WFH Options
Ada Meher
London offices 1-2 days a week, based on business need. To Be Considered: Demonstrable expertise and experience working on large-scale Data Engineering projects Strong experience in Python/PySpark, Databricks & Apache Spark Hands-on experience with both batch & streaming pipelines Strong experience in AWS and associated tooling (e.g. S3, Glue, Redshift, Lambda, Terraform, etc.) Experience designing Data Engineering …
Knutsford, Cheshire, United Kingdom Hybrid / WFH Options
Experis
front-end development (HTML, Streamlit, Flask). Familiarity with model deployment and monitoring in cloud environments (AWS). Understanding of machine learning lifecycle and data pipelines. Proficiency with Python, PySpark, big data ecosystems Hands-on experience with MLOps tools (e.g., MLflow, Airflow, Docker, Kubernetes) Secondary Skills Experience with RESTful APIs and integrating backend services All profiles will be reviewed …