Technical Data Architect
Location: Central London
Type: Permanent, hybrid role (2-3 days from client location)

We are seeking a highly skilled Technical Data Architect with expertise in Databricks, PySpark, and modern data engineering practices. The ideal candidate will lead the design, development, and optimization of scalable data pipelines, while ensuring data accuracy, consistency, and performance across the enterprise … cross-functional teams.
________________________________________
Key Responsibilities
- Lead the design, development, and maintenance of scalable, high-performance data pipelines on Databricks.
- Architect and implement data ingestion, transformation, and integration workflows using PySpark, SQL, and Delta Lake.
- Guide the team in migrating legacy ETL processes to modern cloud-based data pipelines.
- Ensure data accuracy, schema consistency, row counts, and KPIs during migration … cloud platforms, and analytics.
________________________________________
Required Skills & Qualifications
- 10-12 years of experience in data engineering, with at least 3 years in a technical lead role.
- Strong expertise in Databricks, PySpark, and Delta Lake.
- DBT
- Advanced proficiency in SQL, ETL/ELT pipelines, and data modelling.
- Experience with Azure Data Services (ADLS, ADF, Synapse) or other major cloud platforms.
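To make the migration checks above concrete, here is a minimal PySpark sketch of row-count, schema, and KPI validation between a legacy table and its migrated counterpart. The table and column names are hypothetical, purely for illustration, not this employer's actual schema.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("migration-validation").getOrCreate()

# Hypothetical table names, for illustration only.
legacy = spark.table("legacy_db.orders")
migrated = spark.table("lakehouse_db.orders")

# Row-count parity: a cheap first check that nothing was dropped or duplicated.
legacy_count, migrated_count = legacy.count(), migrated.count()
assert legacy_count == migrated_count, f"Row counts differ: {legacy_count} vs {migrated_count}"

# Schema consistency: compare column names and types field by field.
legacy_schema = {f.name: f.dataType for f in legacy.schema.fields}
migrated_schema = {f.name: f.dataType for f in migrated.schema.fields}
assert legacy_schema == migrated_schema, "Schema drift detected"

# KPI parity: recompute a business aggregate on both sides and compare.
legacy_kpi = legacy.agg({"amount": "sum"}).first()[0]
migrated_kpi = migrated.agg({"amount": "sum"}).first()[0]
assert legacy_kpi == migrated_kpi, "KPI drift: sum(amount) differs between systems"
```

In practice a check like this would run as a scheduled job per migrated table, with tolerances for floating-point aggregates rather than strict equality.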
… quality data models that power reporting and advanced analytics across the business.

What You'll Do
- Build and maintain scalable data pipelines in Azure Databricks and Microsoft Fabric using PySpark and Python
- Support the medallion architecture (bronze, silver, gold layers) to ensure a clean separation of raw, refined, and curated data
- Design and implement dimensional models such as star … performance

What You'll Bring
- 3 to 5 years of experience in data engineering, data warehousing, or analytics engineering
- Strong SQL and Python skills with hands-on experience in PySpark
- Exposure to Azure Databricks, Microsoft Fabric, or similar cloud data platforms
- Understanding of Delta Lake, Git, and CI/CD workflows
- Experience with relational data modelling and dimensional modelling
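For readers unfamiliar with the medallion pattern this role mentions, a minimal bronze-to-silver-to-gold flow in PySpark with Delta Lake might look like the sketch below. The storage paths and column names are assumptions for illustration only.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw source data as-is (hypothetical ADLS landing path).
raw = spark.read.json("abfss://landing@account.dfs.core.windows.net/sales/")
raw.write.format("delta").mode("append").save("/lake/bronze/sales")

# Silver: clean and conform - dedupe, fix types, drop bad rows.
bronze = spark.read.format("delta").load("/lake/bronze/sales")
silver = (
    bronze.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("/lake/silver/sales")

# Gold: curated, business-level aggregates ready for reporting.
gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("lifetime_value"))
gold.write.format("delta").mode("overwrite").save("/lake/gold/customer_value")
```

The design intent is that each layer stays independently queryable: bronze preserves raw history, silver is cleaned and conformed, and gold serves curated reporting tables.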
London, South East, England, United Kingdom Hybrid / WFH Options
Method Resourcing
Data Analyst/BI Developer - Financial Services (Power BI, PySpark, Databricks)
Location: London (Hybrid, 2 days per week onsite)
Salary: £65,000 to £75,000 + bonus + benefits
Sector: Private Wealth/Financial Services

About the Role
A leading Financial Services organisation is looking for a Data Analyst/BI Developer to join its Data Insight and Analytics division.
- Partner with senior leadership and key stakeholders to translate requirements into high-impact analytical products.
- Design, build, and maintain Power BI dashboards that inform strategic business decisions.
- Use PySpark, Databricks or Microsoft Fabric, and relational/dimensional modelling (Kimball methodology) to structure and transform data.
- Promote best practices in Git, CI/CD pipelines (Azure DevOps), and data …
… analysis, BI development, or data engineering.
- Strong knowledge of relational and dimensional modelling (Kimball or similar).
- Proven experience with:
  - Power BI (advanced DAX, data modelling, RLS, deployment pipelines)
  - PySpark and Databricks or Microsoft Fabric
  - Git and CI/CD pipelines (Azure DevOps preferred)
  - SQL for querying and data transformation
- Experience with Python for data extraction and API integration.
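As a rough illustration of the Kimball-style dimensional modelling this role calls for, a star-schema build in PySpark might look like the following sketch; the source table, columns, and target schema names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

# Hypothetical cleaned transactions table from the silver layer.
txns = spark.table("silver.transactions")

# Dimension: one row per client, with a generated surrogate key.
dim_client = (
    txns.select("client_id", "client_name", "segment").distinct()
    .withColumn("client_key", F.monotonically_increasing_id())
)

# Fact: measures plus a foreign key into the client dimension.
fact_txn = (
    txns.join(dim_client, "client_id")
    .select("client_key", "txn_date", "amount", "fee")
)

# Persist as gold-layer tables for Power BI to consume.
dim_client.write.format("delta").mode("overwrite").saveAsTable("gold.dim_client")
fact_txn.write.format("delta").mode("overwrite").saveAsTable("gold.fact_transaction")
```

Separating descriptive attributes into dimensions and measures into facts is what lets Power BI models stay small and DAX measures stay simple.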
… role for you.

Key Responsibilities:
- Adapt and deploy a cutting-edge platform to meet customer needs
- Design scalable generative AI workflows (e.g., using Palantir)
- Execute complex data integrations using PySpark and similar tools
- Collaborate directly with clients to understand their priorities and deliver impact

Why Join?
- Be part of a mission-driven startup redefining how industrial companies operate
- Work …
… and real interest in doing this properly - not endless meetings and PowerPoints.

What you'll be doing:
- Designing, building, and optimising end-to-end data pipelines using Azure Databricks, PySpark, ADF, and Delta Lake
- Implementing a medallion architecture - from raw to enriched to curated
- Working with Delta Lake and Spark for both batch and streaming data
- Collaborating with analysts …

What they're looking for:
- A strong communicator - someone who can build relationships across technical and business teams
- Hands-on experience building pipelines in Azure using Databricks, ADF, and PySpark
- Strong SQL and Python skills
- Understanding of medallion architecture and data lakehouse concepts
- Bonus points if you've worked with Power BI, Azure Purview, or streaming tools
- You're …
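To illustrate the batch-plus-streaming Delta Lake work described above, a minimal Structured Streaming sketch is shown below. It assumes a Databricks runtime or the delta-spark package, and the paths are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Read the bronze Delta table as a stream: new commits arrive as micro-batches.
bronze_stream = spark.readStream.format("delta").load("/lake/bronze/events")

# The same DataFrame transformations work in batch and streaming modes.
enriched = bronze_stream.filter("event_type IS NOT NULL")

# Continuously append into the silver layer; the checkpoint makes restarts safe.
query = (
    enriched.writeStream.format("delta")
    .option("checkpointLocation", "/lake/_checkpoints/events_silver")
    .outputMode("append")
    .start("/lake/silver/events")
)
query.awaitTermination()
```

Because Spark shares one DataFrame API across both modes, the silver-layer cleansing logic can be reused verbatim between nightly batch loads and continuous streams.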
… Data Developer for an urgent contract assignment.

Key Requirements:
- Proven background in AI and data development
- Strong proficiency in Python, including data-focused libraries such as Pandas, NumPy, and PySpark
- Hands-on experience with Apache Spark (PySpark preferred)
- Solid understanding of data management and processing pipelines
- Experience in algorithm development and graph data structures is advantageous
- Active SC …
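As a small, hedged example of the graph-data-structure work this posting mentions, an adjacency-list graph with a breadth-first traversal in plain Python could look like this; the edges are made up for illustration.

```python
from collections import defaultdict, deque

# Adjacency-list graph: each node maps to the nodes it points at.
graph = defaultdict(list)
for src, dst in [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]:
    graph[src].append(dst)

def bfs(start):
    """Breadth-first traversal; returns nodes in visit order."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return order

print(bfs("a"))  # ['a', 'b', 'c', 'd']
```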
… with a focus on performance, scalability, and reliability.

Responsibilities
- Design and implement robust data migration pipelines using Azure Data Factory, Synapse Analytics, and Databricks
- Develop scalable ETL processes using PySpark and Python
- Collaborate with stakeholders to understand legacy data structures and ensure accurate mapping and transformation
- Ensure data quality, governance, and performance throughout the migration lifecycle
- Document technical processes … and support knowledge transfer to internal teams

Required Skills
- Strong hands-on experience with Azure Data Factory, Synapse, Databricks, PySpark, Python, and SQL
- Proven track record in delivering data migration projects within Azure environments
- Ability to work independently and communicate effectively with technical and non-technical stakeholders
- Previous experience in consultancy or client-facing roles is advantageous
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
… role:
- Adapt and deploy a powerful data platform to solve complex business problems
- Design scalable generative AI workflows using modern platforms like Palantir AIP
- Execute advanced data integration using PySpark and distributed technologies
- Collaborate directly with clients to understand priorities and deliver outcomes

What We're Looking For:
- Strong skills in PySpark, Python, and SQL
- Ability to translate …
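As a hedged sketch of the kind of PySpark data integration this role describes, joining and conforming two heterogeneous feeds might look like the following; the source paths, keys, and columns are all assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("integration-sketch").getOrCreate()

# Two hypothetical source feeds with inconsistent conventions.
crm = spark.read.parquet("/sources/crm/accounts")                 # account_id, name
erp = spark.read.csv("/sources/erp/customers.csv", header=True)   # cust_ref, revenue

# Conform keys and types before joining.
erp_conformed = (
    erp.withColumn("account_id", F.upper(F.trim("cust_ref")))
    .withColumn("revenue", F.col("revenue").cast("double"))
)

# Integrate into a single conformed view, keeping all CRM accounts.
integrated = (
    crm.join(erp_conformed, "account_id", "left")
    .select("account_id", "name", "revenue")
)

integrated.write.format("delta").mode("overwrite").save("/lake/silver/accounts")
```

Most of the effort in real integrations sits in the conforming step: aligning keys, types, and reference data before any join is attempted.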
… leaders, working at the intersection of cutting-edge technology and real-world impact.

As part of this role, you will be responsible for:
- Executing complex data integration projects using PySpark and distributed technologies
- Designing and implementing scalable generative AI workflows using modern AI infrastructure
- Collaborating with cross-functional teams to ensure successful delivery and adoption
- Driving continuous improvement and innovation across client engagements

To be successful in this role, you will have:
- Experience working in data engineering or data integration
- Strong technical skills in Python or PySpark
- Exposure to generative AI platforms or interest in building AI-powered workflows
- Ability to work closely with clients and lead delivery in fast-paced environments
- Exposure to Airflow, Databricks or DBT …
London, South East, England, United Kingdom Hybrid / WFH Options
Oliver James
I'm currently working with a leading insurance broker who is looking to hire a Lead Azure Data Engineer on an initial 12-month fixed-term … an Azure-based data lakehouse.

Key requirements:
* Proven experience working as a principal or lead data engineer
* Strong background working with large datasets, with proficiency in SQL, Python, and PySpark
* Experience managing and mentoring engineers with varying levels of experience
* Hands-on experience deploying pipelines within Azure Databricks, ideally following the Medallion Architecture framework

Hybrid working: Minimum two days …