with multiple data analytics tools (e.g. Power BI) Deep understanding of data warehousing concepts, ETL/ELT pipelines and dimensional modelling Proficiency in advanced programming languages (Python/PySpark, SQL) Experience in data pipeline orchestration (e.g. Airflow, Data Factory) Familiarity with DevOps and CI/CD practices (Git, Azure DevOps etc.) Ability to communicate technical concepts to both …
City of London, London, United Kingdom Hybrid / WFH Options
ECS
Azure services Requirements: 10+ years in cloud data engineering, with a strong focus on building scalable data pipelines Expertise in Azure Databricks, including building and managing ETL pipelines using PySpark or Scala Solid understanding of Apache Spark, Delta Lake, and distributed data processing concepts Hands-on experience with Azure Data Lake Storage, Azure Data Factory, and Azure Synapse Analytics …
data services – Databricks, ADF, ADLS, Power BI. Proficiency in SQL and data profiling for test design and validation. Hands-on experience with test automation frameworks such as Python/PySpark, Great Expectations, Pytest, or dbt tests. Practical understanding of CI/CD integration (Azure DevOps, GitHub Actions, or similar). Strong problem-solving skills and the ability to work …
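The data-quality testing skills this listing names (Great Expectations, Pytest, dbt tests) all formalise the same basic idea: executable assertions against a dataset. As a minimal sketch, using plain Python and invented sample rows and column names rather than any of those frameworks:

```python
# Minimal data-quality checks of the kind that Great Expectations or dbt
# generic tests formalise. The rows and column names here are illustrative
# only, not taken from any real dataset.

rows = [
    {"customer_id": 1, "country": "GB", "revenue": 120.0},
    {"customer_id": 2, "country": "DE", "revenue": 75.5},
    {"customer_id": 3, "country": "GB", "revenue": 0.0},
]

def check_not_null(rows, column):
    """True only if no row has a missing value in `column`."""
    return all(r.get(column) is not None for r in rows)

def check_unique(rows, column):
    """True only if `column` has no duplicates (a primary-key style test)."""
    values = [r[column] for r in rows]
    return len(values) == len(set(values))

def check_accepted_values(rows, column, allowed):
    """True only if every value in `column` is in the allowed set."""
    return all(r[column] in allowed for r in rows)

# In a pytest suite each of these would be its own test function.
assert check_not_null(rows, "customer_id")
assert check_unique(rows, "customer_id")
assert check_accepted_values(rows, "country", {"GB", "DE", "FR"})
```

Frameworks add scheduling, reporting and declarative configuration on top, but the underlying checks reduce to predicates like these.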
Kent, England, United Kingdom Hybrid / WFH Options
Searchability®
Hands-on with Azure data services (Data Lake, Synapse, SQL DB, Functions, Logic Apps) Understanding of data security, access control and governance in regulated environments Proficiency in Python or PySpark for data engineering tasks TO BE CONSIDERED... Please either apply by clicking online or email me directly at chelsea.hackett@searchability.com. By applying to this role you give express consent …
lake and Azure Monitor providing added flexibility for diverse migration and integration projects. Prior experience with tools such as MuleSoft, Boomi, Informatica, Talend, SSIS, or custom scripting languages (Python, PySpark, SQL) for data extraction and transformation. Prior experience with data warehousing and data modelling (Star Schema or Snowflake Schema). Skilled in security frameworks such as GDPR, HIPAA, ISO …
Newbury, Berkshire, England, United Kingdom Hybrid / WFH Options
Intuita
including Azure DevOps or GitHub Considerable experience designing and building operationally efficient pipelines, utilising core Cloud components such as Azure Data Factory, BigQuery, Airflow, Google Cloud Composer and PySpark, etc. Proven experience in modelling data through a medallion-based architecture, with curated dimensional models in the gold layer built for analytical use Strong understanding and/or use of …
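The medallion architecture this listing references layers data as raw "bronze", cleaned "silver", and analytics-ready "gold". The pattern can be sketched without any particular engine; this plain-Python illustration uses hypothetical field names and stands in for what would normally be PySpark DataFrame transformations persisted to Delta/ADLS:

```python
# Illustrative medallion-style flow: bronze (raw, as landed) -> silver
# (cleaned and typed) -> gold (aggregated for analytics). All field names
# and values are invented for the example.

bronze = [
    {"order_id": "1", "amount": "10.50", "region": " uk "},
    {"order_id": "2", "amount": "4.25", "region": "UK"},
    {"order_id": "3", "amount": "bad", "region": "DE"},  # malformed row
]

def to_silver(records):
    """Clean and type bronze rows, dropping any that fail parsing."""
    silver = []
    for r in records:
        try:
            silver.append({
                "order_id": int(r["order_id"]),
                "amount": float(r["amount"]),
                "region": r["region"].strip().upper(),
            })
        except ValueError:
            continue  # a real pipeline would quarantine and log this row
    return silver

def to_gold(records):
    """Aggregate silver rows into an analytics-ready total per region."""
    totals = {}
    for r in records:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

gold = to_gold(to_silver(bronze))
print(gold)  # {'UK': 14.75}
```

The value of the pattern is that each layer has a single, auditable responsibility: bronze preserves the source, silver enforces schema and quality, and gold serves curated dimensional models to analysts.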
City of London, London, United Kingdom Hybrid / WFH Options
Client Server
are an experienced Data Engineer within financial services environments You have expertise with GCP including BigQuery, Pub/Sub, Cloud Composer and IAM You have strong Python, SQL and PySpark skills You have experience with real-time data streaming using Kafka or Spark You have a good knowledge of Data Lakes, Data Warehousing, Data Modelling You're familiar with …
Experience Active SC Clearance (mandatory at application stage). Proven expertise in: Azure Data Factory & Azure Synapse Azure DevOps & Microsoft Azure ecosystem Power BI (including semantic models) Python (incl. PySpark) and advanced SQL dbt with SQL DBs (data transformation & modelling) Dimensional data modelling Terraform for infrastructure-as-code deployments Strong experience with both structured and unstructured data. Delivery track …
end (Data Factory, Synapse, Fabric, or Databricks). Strong SQL development and data modelling capability. Experience integrating ERP or legacy systems into cloud data platforms. Proficiency in Python or PySpark for transformation and automation. Understanding of data governance, access control, and security within Azure. Hands-on experience preparing data for Power BI or other analytics tools. Excellent communication skills …
Key Skills: Strong SQL skills and experience with relational databases. Hands-on experience with Azure (ADF, Synapse, Data Lake) or AWS/GCP equivalents. Familiarity with scripting languages (Python, PySpark). Knowledge of data modelling and warehouse design (Kimball, Data Vault). Exposure to Power BI to support optimised data models for reporting. Agile team experience, CI/CD …
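The Kimball-style warehouse design mentioned here organises data as fact tables joined to dimension tables (a star schema). A minimal sqlite3 sketch, with table and column names invented for illustration:

```python
import sqlite3

# Tiny star-schema sketch (Kimball style): one fact table keyed to one
# dimension table. All table and column names are hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (
        sale_id     INTEGER PRIMARY KEY,
        product_key INTEGER REFERENCES dim_product(product_key),
        qty         INTEGER
    );
    INSERT INTO dim_product VALUES (1, 'Widget'), (2, 'Gadget');
    INSERT INTO fact_sales  VALUES (1, 1, 3), (2, 1, 2), (3, 2, 5);
""")

# The typical analytical query: join the fact to its dimension, then
# aggregate - exactly the shape a Power BI model would issue.
rows = con.execute("""
    SELECT p.name, SUM(f.qty)
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(rows)  # [('Gadget', 5), ('Widget', 5)]
```

A real warehouse adds surrogate keys, slowly changing dimensions and conformed dimensions across facts, but the fact-joins-dimension shape is the same.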
London, South East, England, United Kingdom Hybrid / WFH Options
Method Resourcing
collaboration, learning, and innovation What we're looking for Hands-on experience with the Azure Data Engineering stack (ADF, Databricks, Synapse, Data Lake) Strong skills in SQL and Python (PySpark experience is a bonus) Experience building and optimising ETL/ELT pipelines A background in Financial Services is a plus, but not essential A curious mindset and the ability …
Learning, Deep Learning or LLM Frameworks) Desirable Minimum 2 years' experience in a Data-related field Minimum 2 years in a Business or Management Consulting field Experience of Docker, Hadoop, PySpark, Apache or MS Azure Minimum 2 years NHS/Healthcare experience Disclosure and Barring Service Check This post is subject to the Rehabilitation of Offenders Act (Exceptions Order …
part of our Data Engineering Team. You will not only maintain and optimise our data infrastructure but also spearhead its evolution. Built predominantly on AWS, and utilising technologies like PySpark and Iceberg, our infrastructure is designed for scalability, robustness, and efficiency. You will be part of developing sophisticated data integrations with various platforms, developing real-time data solutions, improving …
while establishing and maintaining best practices for data governance, security, and reliability. What we're looking for: 5+ years' experience in data engineering and cloud infrastructure Expertise in Python (PySpark), SQL, and dbt (or similar) Strong DevOps skills: Terraform, Docker, CI/CD, monitoring/logging tools AWS experience: S3, ECS/Fargate, RDS, Lambda Data warehousing experience (PostgreSQL …