London, South East, England, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
Strong expertise in Power BI – dashboarding, reporting, and data visualisation. Advanced SQL skills for querying and data manipulation. Experience with Databricks for scalable data processing. Desirable Skills: Familiarity with PySpark for distributed data processing. …
quality data models that power reporting and advanced analytics across the business. What You'll Do: Build and maintain scalable data pipelines in Azure Databricks and Microsoft Fabric using PySpark and Python. Support the medallion architecture (bronze, silver, gold layers) to ensure a clean separation of raw, refined, and curated data. Design and implement dimensional models such as star … performance. What You'll Bring: 3 to 5 years of experience in data engineering, data warehousing, or analytics engineering. Strong SQL and Python skills with hands-on experience in PySpark. Exposure to Azure Databricks, Microsoft Fabric, or similar cloud data platforms. Understanding of Delta Lake, Git, and CI/CD workflows. Experience with relational data modelling and dimensional modelling. …
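The medallion pattern this listing describes (bronze → silver → gold) is easiest to picture with a concrete sketch. Below is a minimal pure-Python illustration of the layering idea only; a real Databricks pipeline would use PySpark DataFrames and Delta tables, and all record fields and cleaning rules here are invented for illustration.

```python
# Minimal illustration of medallion layering: bronze (raw), silver (cleaned),
# gold (aggregated for reporting). Field names are hypothetical.

def to_silver(bronze_rows):
    """Clean raw rows: drop records missing an id, normalise region and amount."""
    silver = []
    for row in bronze_rows:
        if row.get("order_id") is None:
            continue  # reject malformed raw records at the silver boundary
        silver.append({
            "order_id": row["order_id"],
            "region": (row.get("region") or "unknown").strip().lower(),
            "amount": float(row.get("amount", 0)),
        })
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned rows into a reporting-ready total per region."""
    totals = {}
    for row in silver_rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

bronze = [
    {"order_id": 1, "region": " UK ", "amount": "10.5"},
    {"order_id": None, "region": "UK", "amount": "99"},  # malformed: dropped
    {"order_id": 2, "region": "uk", "amount": 4.5},
]
gold = to_gold(to_silver(bronze))
print(gold)  # {'uk': 15.0}
```

The point of the separation is that each layer has a single responsibility: bronze keeps everything as received, silver enforces schema and cleanliness, and gold serves analytics.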
scale while contributing to the delivery of data directly to customers and systems across the organisation. Key Responsibilities: Contributing to the development and maintenance of data pipelines using Python, PySpark, and Databricks. Supporting the delivery of enriched datasets to customers via Databricks, RESTful APIs, and event-driven delivery mechanisms (e.g., Kafka or similar). Assisting in data ingestion, transformation, and … development. Continuously developing technical skills and understanding of the business domain. Requirements: Hands-on experience in a software/data engineering role. Proficiency in Python and working knowledge of PySpark or similar distributed data frameworks. Familiarity with Databricks or a strong interest in learning and working with the platform. Understanding of data delivery patterns, including REST APIs and event …
London, South East, England, United Kingdom Hybrid / WFH Options
Method Resourcing
Data Analyst/BI Developer - Financial Services (Power BI, PySpark, Databricks) Location: London (Hybrid, 2 days per week onsite) Salary: £65,000 to £75,000 + bonus + benefits Sector: Private Wealth/Financial Services About the Role: A leading Financial Services organisation is looking for a Data Analyst/BI Developer to join its Data Insight and Analytics … division. Partner with senior leadership and key stakeholders to translate requirements into high-impact analytical products. Design, build, and maintain Power BI dashboards that inform strategic business decisions. Use PySpark, Databricks or Microsoft Fabric, and relational/dimensional modelling (Kimball methodology) to structure and transform data. Promote best practices in Git, CI/CD pipelines (Azure DevOps), and data … analysis, BI development, or data engineering. Strong knowledge of relational and dimensional modelling (Kimball or similar). Proven experience with: Power BI (advanced DAX, data modelling, RLS, deployment pipelines); PySpark and Databricks or Microsoft Fabric; Git and CI/CD pipelines (Azure DevOps preferred); SQL for querying and data transformation. Experience with Python for data extraction and API integration. …
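Kimball-style dimensional modelling, referenced in the listing above, separates numeric measures (a fact table) from descriptive attributes (dimension tables) joined on surrogate keys. A toy pure-Python illustration of that join-and-aggregate shape, with invented table contents (in practice this would be SQL or PySpark over real warehouse tables):

```python
# Toy star schema: a fact table of sales keyed to a product dimension and a
# date dimension by surrogate keys. All data is invented for illustration.

dim_product = {101: {"name": "Widget", "category": "Hardware"}}
dim_date = {20240101: {"year": 2024, "quarter": "Q1"}}
fact_sales = [
    {"product_key": 101, "date_key": 20240101, "amount": 25.0},
    {"product_key": 101, "date_key": 20240101, "amount": 10.0},
]

def sales_by_category_quarter(facts):
    """Join facts to their dimensions and aggregate, as a BI tool would."""
    out = {}
    for f in facts:
        cat = dim_product[f["product_key"]]["category"]
        qtr = dim_date[f["date_key"]]["quarter"]
        out[(cat, qtr)] = out.get((cat, qtr), 0.0) + f["amount"]
    return out

print(sales_by_category_quarter(fact_sales))  # {('Hardware', 'Q1'): 35.0}
```

Keeping measures and attributes apart is what lets Power BI slice one fact table by many independent dimensions without duplicating data.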
Job Description Role/Job Title: Developer (PySpark + Fabric) Work Location: London (Office Based) The Role: The role will be integral to realizing the customer's vision and strategy in transforming some of their critical application and data engineering components. As a global financial markets infrastructure and data provider, the customer leverages cutting-edge technologies to support its …
and real interest in doing this properly - not endless meetings and PowerPoints. What you'll be doing: Designing, building, and optimising end-to-end data pipelines using Azure Databricks, PySpark, ADF, and Delta Lake. Implementing a medallion architecture - from raw to enriched to curated. Working with Delta Lake and Spark for both batch and streaming data. Collaborating with analysts … What they're looking for: A strong communicator - someone who can build relationships across technical and business teams. Hands-on experience building pipelines in Azure using Databricks, ADF, and PySpark. Strong SQL and Python skills. Understanding of medallion architecture and data lakehouse concepts. Bonus points if you've worked with Power BI, Azure Purview, or streaming tools. You're …
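The batch-versus-streaming distinction in the listing above comes down to whether a pipeline reprocesses a full snapshot or consumes only records that arrived since the last run. A hypothetical pure-Python sketch of the incremental (streaming-style) bookkeeping, using a watermark; Delta Lake and Spark Structured Streaming handle this natively, so this only illustrates the idea, and all field names are invented:

```python
# Incremental ingestion sketch: keep a watermark (latest event time already
# processed) and pick up only records newer than it on each micro-batch.

class IncrementalLoader:
    def __init__(self):
        self.watermark = 0  # latest event time already processed
        self.sink = []      # stands in for a Delta table

    def process_microbatch(self, source_rows):
        """Append only unseen rows, then advance the watermark."""
        new_rows = [r for r in source_rows if r["event_time"] > self.watermark]
        self.sink.extend(new_rows)
        if new_rows:
            self.watermark = max(r["event_time"] for r in new_rows)
        return len(new_rows)

loader = IncrementalLoader()
events = [{"event_time": 1, "v": "a"}, {"event_time": 2, "v": "b"}]
loader.process_microbatch(events)        # ingests both rows
events.append({"event_time": 3, "v": "c"})
n = loader.process_microbatch(events)    # only the new row is ingested
print(n, loader.watermark)  # 1 3
```

A batch job, by contrast, would simply recompute over the whole source on every run; the watermark is what makes reprocessing unnecessary.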
Central London, London, United Kingdom Hybrid / WFH Options
iDPP
someone who enjoys building scalable data solutions while staying close to business impact. The Role: As a Data Analytics Engineer, you'll design, build, and maintain reliable data pipelines, primarily using PySpark, SQL, and Python, to ensure business teams (analysts, product managers, finance, operations) have access to well-modeled, actionable data. You'll work closely with stakeholders to translate business needs into … spend more time coding, managing data infrastructure, and ensuring pipeline reliability. Who We're Looking For: Data Analytics: Analysts who have strong experience building and maintaining data pipelines (particularly in PySpark/SQL) and want to work on production-grade infrastructure. Data Engineering: Engineers who want to work more closely with business stakeholders and enable analytics-ready data solutions. Analytics … Professionals who already operate in this hybrid space, with proven expertise across big data environments, data modeling, and business-facing delivery. Key Skills & Experience: Strong hands-on experience with PySpark, SQL, and Python. Proven track record of building and maintaining data pipelines. Ability to translate business requirements into robust data models and solutions. Experience with data validation, quality checks …
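The data-validation and quality-check work mentioned above usually amounts to asserting invariants on each pipeline run and failing loudly when they break. A minimal pure-Python sketch of that pattern; production pipelines might use PySpark plus a framework such as Great Expectations, and the rules and field names here are invented for illustration:

```python
# Simple data-quality checks of the kind run after a pipeline stage:
# non-null keys, key uniqueness, and value ranges. Fields are hypothetical.

def run_quality_checks(rows):
    """Return a list of failed check names; empty means the batch passed."""
    failures = []
    ids = [r.get("customer_id") for r in rows]
    if any(i is None for i in ids):
        failures.append("null customer_id")
    if len(ids) != len(set(ids)):
        failures.append("duplicate customer_id")
    if any(not (0 <= r.get("age", 0) <= 120) for r in rows):
        failures.append("age out of range")
    return failures

good = [{"customer_id": 1, "age": 30}, {"customer_id": 2, "age": 45}]
bad = [{"customer_id": 1, "age": 30}, {"customer_id": 1, "age": 200}]
print(run_quality_checks(good))  # []
print(run_quality_checks(bad))   # ['duplicate customer_id', 'age out of range']
```

Returning the failure list (rather than raising immediately) lets the pipeline log every broken invariant for a batch before deciding whether to quarantine it.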
Leeds, West Yorkshire, England, United Kingdom Hybrid / WFH Options
Hays Specialist Recruitment Limited
Your new company: This is a pivotal opportunity to join the Data and Innovation division of a large complex organisation leading the delivery of SAM (Supervisory Analytics and Metrics), a transformative programme enhancing supervisory decision-making through advanced data and …
Lead Software Engineer - Databricks, PySpark, AWS. London, United Kingdom. Posted about 2 months ago. This is a job posted by our partner Jooble. Qualifications: Strong experience with Redshift. Proficiency in database systems, both SQL and NoSQL, as well as programming languages such as Python, Java, or Scala. Preferred Qualifications: Experience with data architecture, data modeling, data warehousing, and …
source of truth. Develop and optimise CI/CD pipelines in Azure DevOps to automate deployment of workspaces, Unity Catalog, networking, and security. Work with Databricks (Spark/Scala, PySpark) to support ingestion frameworks, data processing, and platform-level libraries. Implement secure connectivity (VNET injection, Private Link, firewall, DNS, RBAC). Monitor, troubleshoot, and optimise platform reliability and performance. … processes, and standards for wider engineering adoption. Must Have: Proven expertise with Microsoft Azure (networking, security, storage, compute). Strong proficiency in Databricks with hands-on Scala (Spark) and PySpark. Deep experience with Terraform for Azure resource deployment and governance. Hands-on experience with Azure DevOps pipelines (YAML, agents, service connections). Understanding of Azure Active Directory/Entra …
Role: Developer (PySpark + Fabric) Location: London Contract (6 months+), Hybrid (Inside IR35) The Role: The role will be integral to realizing the customer's vision and strategy in transforming some of their critical application and data engineering components. As a …
Manager - Data Engineering Would you like to ensure the successful delivery of the Data Platform and Software Innovations? Do you enjoy creating a collaborative and customer-focused working environment? About our team: LexisNexis Intellectual Property, which serves customers in more …
contract assignment. In order to be successful, you will have the following experience: Extensive AI & Data Development background. Experience with Python (including data libraries such as Pandas, NumPy, and PySpark) and Apache Spark (PySpark preferred). Strong experience with data management and processing pipelines. Algorithm development and knowledge of graphs will be beneficial. SC Clearance is essential. Within this …
Data Developer for an urgent contract assignment. Key Requirements: Proven background in AI and data development. Strong proficiency in Python, including data-focused libraries such as Pandas, NumPy, and PySpark. Hands-on experience with Apache Spark (PySpark preferred). Solid understanding of data management and processing pipelines. Experience in algorithm development and graph data structures is advantageous. Active SC …
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
role: Adapt and deploy a powerful data platform to solve complex business problems. Design scalable generative AI workflows using modern platforms like Palantir AIP. Execute advanced data integration using PySpark and distributed technologies. Collaborate directly with clients to understand priorities and deliver outcomes. What We're Looking For: Strong skills in PySpark, Python, and SQL. Ability to translate …
London, South East, England, United Kingdom Hybrid / WFH Options
Oliver James
Proven experience working as a principal or lead data engineer * Strong background working with large datasets, with proficiency in SQL, Python, and PySpark * Experience managing and mentoring engineers with varying levels of experience I'm currently working with a leading insurance broker who is looking to hire a Lead Azure Data Engineer on an initial 12-month fixed-term … an Azure-based data lakehouse. Key requirements: * Proven experience working as a principal or lead data engineer * Strong background working with large datasets, with proficiency in SQL, Python, and PySpark * Experience managing and mentoring engineers with varying levels of experience * Hands-on experience deploying pipelines within Azure Databricks, ideally following the Medallion Architecture framework. Hybrid working: Minimum two days …
Senior Applied Data Scientist (FTC until end of March 2026) London dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First. Our …