Warrington, Cheshire, North West England, United Kingdom Hybrid / WFH Options
Gerrard White
Knowledge of the technical differences between different packages for some of these model types would be an advantage. Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL). A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science). Experience of WTW's Radar software is preferred. Proficient at communicating results in a concise…
Bolton, Greater Manchester, North West England, United Kingdom Hybrid / WFH Options
Gerrard White
contract in London. Job description: Role Title: SAS Admin. Must-have skills: Primary Skills - SAS Admin, Enterprise Guide, basic SAS coding. Secondary Skills - Unix scripting, GitLab, YAML, Autosys, PySpark, Snowflake, AWS, Agile practices, SQL. The candidate should have strong experience in SAS administration and expert SAS coding skills, with more than 6 years' experience. Should have…
Contract duration: 6 months (extendable). Location: London. Must-have skills: Primary Skills - SAS Admin, Enterprise Guide, basic SAS coding. Secondary Skills - Unix scripting, GitLab, YAML, Autosys, PySpark, Snowflake, AWS, Agile practices, SQL. The candidate should have strong experience in SAS administration and expert SAS coding skills, with more than 6 years' experience. Should have very good…
platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks … practices. Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and…
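The "data quality checks and validation rules" described above can be sketched minimally in plain Python; dicts stand in for DataFrame rows, and the field names (`customer_id`, `amount`) and rules are hypothetical, not taken from the listing:

```python
# Minimal sketch of row-level data quality rules inside a pipeline.
# Plain Python stands in for a Spark DataFrame; rule names are hypothetical.

def validate_row(row):
    """Return a list of rule violations for one record."""
    errors = []
    if row.get("customer_id") is None:
        errors.append("customer_id is null")
    if not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
        errors.append("amount must be a non-negative number")
    return errors

def split_valid_invalid(rows):
    """Partition records into clean rows and quarantined rows,
    as a quality gate in a pipeline might."""
    valid, invalid = [], []
    for row in rows:
        errs = validate_row(row)
        (invalid if errs else valid).append({**row, "errors": errs})
    return valid, invalid

rows = [
    {"customer_id": 1, "amount": 120.0},
    {"customer_id": None, "amount": 50.0},
    {"customer_id": 2, "amount": -5},
]
valid, invalid = split_valid_invalid(rows)
print(len(valid), len(invalid))  # → 1 2
```

In a real Spark job the same rules would typically run as column expressions or UDFs, with the quarantined rows written to a separate table for inspection.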
London (City of London), South East England, United Kingdom
Mastek
London, South East, England, United Kingdom Hybrid / WFH Options
Asset Resourcing Limited
Data QA Engineer - Remote-first - £55,000-£65,000. Overview: As a Data QA Engineer, you will ensure the reliability, accuracy and performance of our client's data solutions. Operating remotely, you will work closely with Data Engineers, Architects and Analysts…
Haywards Heath, South East England, United Kingdom
Gerrard White
predictive modelling techniques: Logistic Regression, GBMs, Elastic Net GLMs, GAMs, Decision Trees, Random Forests, Neural Nets and Clustering. Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL). A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science). Experience of WTW's Radar software is preferred. Proficient at communicating results in a concise…
Crawley, West Sussex, South East England, United Kingdom
Gerrard White
for Technical Data Architect. Location: Central London. Type: Permanent, hybrid role (2-3 days from client location). We are seeking a highly skilled Technical Data Architect with expertise in Databricks, PySpark, and modern data engineering practices. The ideal candidate will lead the design, development, and optimization of scalable data pipelines, while ensuring data accuracy, consistency, and performance across the enterprise … cross-functional teams. ________________________________________ Key Responsibilities: Lead the design, development, and maintenance of scalable, high-performance data pipelines on Databricks. Architect and implement data ingestion, transformation, and integration workflows using PySpark, SQL, and Delta Lake. Guide the team in migrating legacy ETL processes to modern cloud-based data pipelines. Ensure data accuracy, schema consistency, row counts, and KPIs during migration … cloud platforms, and analytics. ________________________________________ Required Skills & Qualifications: 10-12 years of experience in data engineering, with at least 3+ years in a technical lead role. Strong expertise in Databricks, PySpark, and Delta Lake. DBT. Advanced proficiency in SQL, ETL/ELT pipelines, and data modelling. Experience with Azure Data Services (ADLS, ADF, Synapse) or other major cloud platforms…
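The migration duty above (ensuring data accuracy, schema consistency, row counts, and KPIs during migration) might look like the following reconciliation sketch; the tables, column names, and KPI are invented for illustration:

```python
# Sketch of a post-migration reconciliation check: compare row counts,
# column sets, and a simple KPI between a legacy extract and its
# migrated copy. Table contents and column names are hypothetical.

def reconcile(legacy_rows, migrated_rows, kpi_column):
    report = {
        "row_count_match": len(legacy_rows) == len(migrated_rows),
        "schema_match": set(legacy_rows[0]) == set(migrated_rows[0]),
        "kpi_match": (
            sum(r[kpi_column] for r in legacy_rows)
            == sum(r[kpi_column] for r in migrated_rows)
        ),
    }
    report["ok"] = all(report.values())
    return report

legacy = [{"id": 1, "revenue": 100}, {"id": 2, "revenue": 250}]
migrated = [{"id": 1, "revenue": 100}, {"id": 2, "revenue": 250}]
print(reconcile(legacy, migrated, "revenue"))
# → {'row_count_match': True, 'schema_match': True, 'kpi_match': True, 'ok': True}
```

Against real Databricks tables the same checks would run as aggregate queries on both sides rather than in-memory sums, but the shape of the report is the same.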
quality data models that power reporting and advanced analytics across the business. What You'll Do: Build and maintain scalable data pipelines in Azure Databricks and Microsoft Fabric using PySpark and Python. Support the medallion architecture (bronze, silver, gold layers) to ensure a clean separation of raw, refined, and curated data. Design and implement dimensional models such as star … performance. What You'll Bring: 3 to 5 years of experience in data engineering, data warehousing, or analytics engineering. Strong SQL and Python skills with hands-on experience in PySpark. Exposure to Azure Databricks, Microsoft Fabric, or similar cloud data platforms. Understanding of Delta Lake, Git, and CI/CD workflows. Experience with relational data modelling and dimensional modelling…
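The medallion flow described above (bronze, silver, gold) can be sketched in plain Python; the cleaning and aggregation rules here are illustrative assumptions, not the client's actual logic:

```python
# Minimal sketch of the medallion pattern: raw (bronze) records are
# cleaned into silver, then aggregated into a curated gold table.
# Field names and rules are made up for illustration.

bronze = [
    {"order_id": "1", "region": " EMEA ", "amount": "100"},
    {"order_id": "2", "region": "emea", "amount": "50"},
    {"order_id": "2", "region": "emea", "amount": "50"},  # duplicate record
]

# Silver: standardise types and values, drop duplicates.
seen, silver = set(), []
for r in bronze:
    key = r["order_id"]
    if key in seen:
        continue
    seen.add(key)
    silver.append({"order_id": int(key),
                   "region": r["region"].strip().upper(),
                   "amount": float(r["amount"])})

# Gold: curated aggregate ready for reporting.
gold = {}
for r in silver:
    gold[r["region"]] = gold.get(r["region"], 0.0) + r["amount"]

print(gold)  # → {'EMEA': 150.0}
```

In Databricks each layer would be a Delta table and the steps would be PySpark transformations, but the separation of raw, refined, and curated data is the same idea.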
London, South East, England, United Kingdom Hybrid / WFH Options
Method Resourcing
Data Analyst/BI Developer - Financial Services (Power BI, PySpark, Databricks) Location: London (Hybrid, 2 days per week onsite) Salary: £65,000 to £75,000 + bonus + benefits Sector: Private Wealth/Financial Services About the Role: A leading Financial Services organisation is looking for a Data Analyst/BI Developer to join its Data Insight and Analytics … division. Partner with senior leadership and key stakeholders to translate requirements into high-impact analytical products. Design, build, and maintain Power BI dashboards that inform strategic business decisions. Use PySpark, Databricks or Microsoft Fabric, and relational/dimensional modelling (Kimball methodology) to structure and transform data. Promote best practices in Git, CI/CD pipelines (Azure DevOps), and data … analysis, BI development, or data engineering. Strong knowledge of relational and dimensional modelling (Kimball or similar). Proven experience with: Power BI (advanced DAX, data modelling, RLS, deployment pipelines), PySpark and Databricks or Microsoft Fabric, Git and CI/CD pipelines (Azure DevOps preferred), SQL for querying and data transformation. Experience with Python for data extraction and API integration…
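The Kimball-style relational/dimensional modelling mentioned above can be illustrated with a toy star-schema join in plain Python; the fact and dimension tables are made up:

```python
# Tiny illustration of a star-schema query: a fact table of sales
# joined to a date dimension via a surrogate key, then aggregated
# by month. All table contents are hypothetical.

dim_date = {
    20240101: {"year": 2024, "month": 1},
    20240215: {"year": 2024, "month": 2},
}
fact_sales = [
    {"date_key": 20240101, "amount": 10.0},
    {"date_key": 20240101, "amount": 5.0},
    {"date_key": 20240215, "amount": 7.5},
]

# Join each fact row to its dimension row, then group by (year, month).
monthly = {}
for f in fact_sales:
    d = dim_date[f["date_key"]]
    key = (d["year"], d["month"])
    monthly[key] = monthly.get(key, 0.0) + f["amount"]

print(monthly)  # → {(2024, 1): 15.0, (2024, 2): 7.5}
```

The same shape in SQL would be a `JOIN` on the surrogate key plus a `GROUP BY`; in Power BI the dimension table would drive slicers and the fact table the measures.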
Lead Data Engineer (Databricks) - Leeds (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, Github, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer) Our … modern data platform using cutting-edge technologies, architecting big data solutions and developing complex enterprise data ETL and ML pipelines and projections. The successful candidate will have strong Python, PySpark and SQL experience, possess a clear understanding of Databricks, as well as a passion for Data Science (R, Machine Learning and AI). Database experience with SQL and No … Benefits: To apply for this position please send your CV to Nathan Warner at Noir Consulting. (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, Github, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data…)
Bradford, Yorkshire and the Humber, United Kingdom
Noir
role for you. Key Responsibilities: Adapt and deploy a cutting-edge platform to meet customer needs. Design scalable generative AI workflows (e.g., using Palantir). Execute complex data integrations using PySpark and similar tools. Collaborate directly with clients to understand their priorities and deliver impact. Why Join? Be part of a mission-driven startup redefining how industrial companies operate. Work…
and real interest in doing this properly - not endless meetings and PowerPoints. What you'll be doing: Designing, building, and optimising end-to-end data pipelines using Azure Databricks, PySpark, ADF, and Delta Lake. Implementing a medallion architecture - from raw to enriched to curated. Working with Delta Lake and Spark for both batch and streaming data. Collaborating with analysts … What they're looking for: A strong communicator - someone who can build relationships across technical and business teams. Hands-on experience building pipelines in Azure using Databricks, ADF, and PySpark. Strong SQL and Python skills. Understanding of medallion architecture and data lakehouse concepts. Bonus points if you've worked with Power BI, Azure Purview, or streaming tools. You're…
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Lorien
s passionate about building scalable, cloud-native data platforms. You'll be a key player in a growing team, helping to shape the future of data infrastructure using AWS, PySpark, Iceberg, and more. From designing high-performance pipelines to supporting a full-scale migration from SQL Server to AWS, this role offers the chance to work on real-time … practices. Staying ahead of the curve with emerging data technologies. What You'll Bring: Solid hands-on experience with AWS (Glue, Lambda, S3, Redshift, EMR). Strong Python, SQL, and PySpark skills. Deep understanding of data warehousing and lakehouse concepts. Problem-solving mindset with a focus on performance and scalability. Excellent communication skills across technical and non-technical teams. Familiarity…
source of truth. Develop and optimise CI/CD pipelines in Azure DevOps to automate deployment of workspaces, Unity Catalog, networking, and security. Work with Databricks (Spark/Scala, PySpark) to support ingestion frameworks, data processing, and platform-level libraries. Implement secure connectivity (VNET injection, Private Link, firewall, DNS, RBAC). Monitor, troubleshoot, and optimise platform reliability and performance. … processes, and standards for wider engineering adoption. Must Have: Proven expertise with Microsoft Azure (networking, security, storage, compute). Strong proficiency in Databricks with hands-on Scala (Spark) and PySpark. Deep experience with Terraform for Azure resource deployment and governance. Hands-on with Azure DevOps pipelines (YAML, agents, service connections). Understanding of Azure Active Directory/Entra…
Warrington, Cheshire, North West England, United Kingdom
TalkTalk
Bolton, Greater Manchester, North West England, United Kingdom
TalkTalk