London, South East England, United Kingdom Hybrid / WFH Options
myGwork - LGBTQ+ Business Community
context. Proven success in leading analysts, including development, prioritisation, and delivery. Deep comfort with tools like SQL, Tableau, and large-scale data platforms (e.g., Databricks); bonus for Python or PySpark skills. Strong grasp of A/B testing, experimentation design, and statistical rigour. Exceptional communicator - able to distil complex data into clear, actionable narratives for senior audiences. Strategic thinker …
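The "A/B testing, experimentation design, and statistical rigour" requirement above can be made concrete with a minimal two-proportion z-test. This is an illustrative sketch only; the function name and all sample numbers are invented, not taken from the posting.

```python
from math import erfc, sqrt

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B experiment.

    conv_* are conversion counts, n_* are sample sizes.
    Returns (z, p_value), using the pooled-proportion standard error.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal survival function
    p_value = erfc(abs(z) / sqrt(2))
    return z, p_value

# Hypothetical experiment: 2.0% vs 2.6% conversion on 10k users per arm
z, p = ab_test_z(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice a platform such as Databricks would compute the counts with SQL or PySpark; the statistical decision itself reduces to a calculation like this.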
Contract duration: 6 months (can be extended) Location: London Must-have skills: Primary Skills - SAS Admin, Enterprise Guide, basic SAS coding skills Secondary Skills - Unix Scripting, GitLab, YAML, Autosys, PySpark, Snowflake, AWS, Agile practice, SQL The candidate should have strong experience in SAS administration and expert SAS coding skills, with more than 6 years' experience. Should have very good …
London (City of London), South East England, United Kingdom
Mastek
platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks … practices. Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and …
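The "data quality checks and validation rules within data pipelines" responsibility above follows a common pattern: run each record through named checks and route failures to a quarantine area with the reasons attached. The sketch below mirrors that pattern in plain Python; on the actual stack it would be expressed in PySpark, and every rule name here is a made-up example.

```python
from datetime import datetime

def _parses_date(value):
    """True when value is a 'YYYY-MM-DD' string."""
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except (TypeError, ValueError):
        return False

# Hypothetical validation rules; each returns True when the record passes.
CHECKS = {
    "id_present": lambda r: r.get("id") is not None,
    "amount_non_negative": lambda r: isinstance(r.get("amount"), (int, float)) and r["amount"] >= 0,
    "date_parses": lambda r: _parses_date(r.get("event_date")),
}

def validate(records):
    """Split records into (clean, quarantined); quarantined rows carry reasons."""
    clean, quarantined = [], []
    for r in records:
        failed = [name for name, check in CHECKS.items() if not check(r)]
        if failed:
            quarantined.append({**r, "_failed_checks": failed})
        else:
            clean.append(r)
    return clean, quarantined
```

Keeping the reasons on the quarantined rows is what makes the "validation rules" auditable downstream, rather than silently dropping bad data.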
London, South East, England, United Kingdom Hybrid / WFH Options
Asset Resourcing Limited
Data QA Engineer - Remote-first - £55,000-£65,000 Overview: As a Data QA Engineer, you will ensure the reliability, accuracy and performance of our client's data solutions. Operating remotely, you will work closely with Data Engineers, Architects and Analysts …
London, South East, England, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
Strong expertise in Power BI – dashboarding, reporting, and data visualisation Advanced SQL skills for querying and data manipulation Experience with Databricks for scalable data processing Desirable Skills Familiarity with PySpark for distributed data processing …
quality data models that power reporting and advanced analytics across the business. What You'll Do Build and maintain scalable data pipelines in Azure Databricks and Microsoft Fabric using PySpark and Python Support the medallion architecture (bronze, silver, gold layers) to ensure a clean separation of raw, refined, and curated data Design and implement dimensional models such as star … performance What You'll Bring 3 to 5 years of experience in data engineering, data warehousing, or analytics engineering Strong SQL and Python skills with hands-on experience in PySpark Exposure to Azure Databricks, Microsoft Fabric, or similar cloud data platforms Understanding of Delta Lake, Git, and CI/CD workflows Experience with relational data modelling and dimensional modelling …
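The medallion architecture mentioned above (bronze, silver, gold) is a progression from raw landed data to refined, typed records to curated business aggregates. A minimal sketch in plain Python, assuming invented field names; the production equivalent would be PySpark DataFrames on Databricks or Fabric:

```python
# Bronze: raw, as-landed records, duplicates and bad keys included
bronze = [
    {"order_id": "A1", "region": " north ", "amount": "10.0"},
    {"order_id": "A1", "region": " north ", "amount": "10.0"},  # duplicate
    {"order_id": "A2", "region": "SOUTH",   "amount": "5.5"},
    {"order_id": None, "region": "south",   "amount": "1.0"},   # bad key
]

def to_silver(rows):
    """Refined layer: drop bad keys, dedupe on order_id, standardise types."""
    seen, silver = set(), []
    for r in rows:
        key = r["order_id"]
        if key is None or key in seen:
            continue
        seen.add(key)
        silver.append({"order_id": key,
                       "region": r["region"].strip().lower(),
                       "amount": float(r["amount"])})
    return silver

def to_gold(rows):
    """Curated layer: business-level aggregate (revenue per region)."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

gold = to_gold(to_silver(bronze))
print(gold)  # → {'north': 10.0, 'south': 5.5}
```

The "clean separation" the ad describes is exactly this: each layer is a distinct, reproducible transformation rather than ad-hoc edits to one dataset.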
London, South East, England, United Kingdom Hybrid / WFH Options
Method Resourcing
Data Analyst/BI Developer - Financial Services (Power BI, PySpark, Databricks) Location: London (Hybrid, 2 days per week onsite) Salary: £65,000 to £75,000 + bonus + benefits Sector: Private Wealth/Financial Services About the Role A leading Financial Services organisation is looking for a Data Analyst/BI Developer to join its Data Insight and Analytics … division. Partner with senior leadership and key stakeholders to translate requirements into high-impact analytical products. Design, build, and maintain Power BI dashboards that inform strategic business decisions. Use PySpark , Databricks or Microsoft Fabric , and relational/dimensional modelling (Kimball methodology) to structure and transform data. Promote best practices in Git , CI/CD pipelines (Azure DevOps), and data … analysis, BI development, or data engineering. Strong knowledge of relational and dimensional modelling (Kimball or similar). Proven experience with: Power BI (advanced DAX, data modelling, RLS, deployment pipelines) PySpark and Databricks or Microsoft Fabric Git and CI/CD pipelines (Azure DevOps preferred) SQL for querying and data transformation Experience with Python for data extraction and API integration.
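The Kimball dimensional modelling called for above means separating descriptive attributes into dimension tables with surrogate keys and keeping measures in fact tables that reference them. A toy sketch, with all table and column names invented for illustration:

```python
# Kimball-style star schema: a customer dimension with generated surrogate
# keys, and a sales fact table referencing the dimension by surrogate key.
def build_dimension(rows, natural_key):
    """Return (dim_table, key_map) with surrogate keys assigned in load order."""
    dim, key_map = [], {}
    for r in rows:
        nk = r[natural_key]
        if nk not in key_map:
            key_map[nk] = len(key_map) + 1  # surrogate key
            dim.append({"sk": key_map[nk], **r})
    return dim, key_map

customers = [{"customer_id": "C9", "segment": "retail"},
             {"customer_id": "C7", "segment": "wealth"}]
sales = [{"customer_id": "C9", "amount": 120.0},
         {"customer_id": "C7", "amount": 300.0}]

dim_customer, sk = build_dimension(customers, "customer_id")
fact_sales = [{"customer_sk": sk[s["customer_id"]], "amount": s["amount"]}
              for s in sales]
```

Surrogate keys decouple the warehouse from source-system identifiers, which is what lets a Power BI model join facts to dimensions cleanly and apply row-level security on dimension attributes.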
Lead Data Engineer (Databricks) - Leeds (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, GitHub, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer) Our … modern data platform using cutting-edge technologies, architecting big data solutions and developing complex enterprise data ETL and ML pipelines and projections. The successful candidate will have strong Python, PySpark and SQL experience, possess a clear understanding of Databricks, as well as a passion for Data Science (R, Machine Learning and AI). Database experience with SQL and No … Benefits To apply for this position please send your CV to Nathan Warner at Noir Consulting. (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, GitHub, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data …
and real interest in doing this properly - not endless meetings and PowerPoints. What you'll be doing: Designing, building, and optimising end-to-end data pipelines using Azure Databricks, PySpark, ADF, and Delta Lake Implementing a medallion architecture - from raw to enriched to curated Working with Delta Lake and Spark for both batch and streaming data Collaborating with analysts … What they're looking for: A strong communicator - someone who can build relationships across technical and business teams Hands-on experience building pipelines in Azure using Databricks, ADF, and PySpark Strong SQL and Python skills Understanding of medallion architecture and data lakehouse concepts Bonus points if you've worked with Power BI, Azure Purview, or streaming tools You're …
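Incremental loads into Delta Lake tables, as described above, are typically applied with a MERGE (upsert): matched rows are updated, unmatched rows inserted. The same idea in plain Python over a dict keyed by primary key, purely as an illustration of the semantics; in the real pipeline this would be Delta Lake's MERGE on Databricks:

```python
def merge_upsert(target, updates, key):
    """Upsert `updates` into `target` (both lists of dicts) matched on `key`.

    Matched keys take the updated values; new keys are appended.
    """
    by_key = {row[key]: row for row in target}
    for row in updates:
        # Merge onto any existing row so unspecified fields survive
        by_key[row[key]] = {**by_key.get(row[key], {}), **row}
    return list(by_key.values())

target = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}]
updates = [{"id": 2, "v": "B"}, {"id": 3, "v": "c"}]
print(merge_upsert(target, updates, "id"))
```

This is what makes batch and streaming ingestion converge on the same table: each micro-batch is just another set of updates merged on the key.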
Leeds, West Yorkshire, England, United Kingdom Hybrid / WFH Options
Hays Specialist Recruitment Limited
Your new company This is a pivotal opportunity to join the Data and Innovation division of a large complex organisation leading the delivery of SAM (Supervisory Analytics and Metrics) - a transformative programme enhancing supervisory decision-making through advanced data and …
Warrington, Cheshire, North West England, United Kingdom
TalkTalk
source of truth. Develop and optimise CI/CD pipelines in Azure DevOps to automate deployment of workspaces, Unity Catalog, networking, and security. Work with Databricks (Spark/Scala, PySpark) to support ingestion frameworks, data processing, and platform-level libraries. Implement secure connectivity (VNET injection, Private Link, firewall, DNS, RBAC). Monitor, troubleshoot, and optimise platform reliability and performance. … processes, and standards for wider engineering adoption. Must Have: Proven expertise with Microsoft Azure (networking, security, storage, compute). Strong proficiency in Databricks with hands-on Scala (Spark) and PySpark. Deep experience with Terraform for Azure resource deployment and governance. Hands-on with Azure DevOps pipelines (YAML, agents, service connections). Understanding of Azure Active Directory/Entra …
Bolton, Greater Manchester, North West England, United Kingdom
TalkTalk
contract assignment. To be successful, you will have the following experience: Extensive AI & Data Development background Experience with Python (including data libraries such as Pandas, NumPy, and PySpark) and Apache Spark (PySpark preferred) Strong experience with data management and processing pipelines Algorithm development and knowledge of graphs will be beneficial SC Clearance is essential Within this …
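The "algorithm development and knowledge of graphs" requirement above usually comes down to fundamentals like traversal. A minimal breadth-first search over an adjacency-list graph, with all node names invented for illustration:

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Shortest path by edge count in an unweighted graph.

    `graph` maps each node to a list of its neighbours.
    Returns the path as a list of nodes, or None if unreachable.
    """
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs_path(graph, "a", "d"))  # → ['a', 'b', 'd']
```

Because BFS explores nodes in order of distance from the start, the first path that reaches the goal is guaranteed to have the fewest edges.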
London, South East, England, United Kingdom Hybrid / WFH Options
Oliver James
Proven experience working as a principal or lead data engineer * Strong background working with large datasets, with proficiency in SQL, Python, and PySpark * Experience managing and mentoring engineers with varying levels of experience I'm currently working with a leading insurance broker who is looking to hire a Lead Azure Data Engineer on an initial 12-month fixed-term … an Azure-based data lakehouse. Key requirements: * Proven experience working as a principal or lead data engineer * Strong background working with large datasets, with proficiency in SQL, Python, and PySpark * Experience managing and mentoring engineers with varying levels of experience * Hands-on experience deploying pipelines within Azure Databricks, ideally following the Medallion Architecture framework Hybrid working: Minimum two days …