Data Developer required for an urgent contract assignment. Key requirements: proven background in AI and data development; strong proficiency in Python, including data libraries such as Pandas, NumPy, and PySpark; hands-on experience with Apache Spark (PySpark preferred); solid understanding of data management and processing pipelines; experience in algorithm development and graph data structures is advantageous; active SC Clearance is essential.
with a focus on performance, scalability, and reliability. Responsibilities: design and implement robust data migration pipelines using Azure Data Factory, Synapse Analytics, and Databricks; develop scalable ETL processes using PySpark and Python; collaborate with stakeholders to understand legacy data structures and ensure accurate mapping and transformation; ensure data quality, governance, and performance throughout the migration lifecycle; document technical processes and support knowledge transfer to internal teams. Required skills: strong hands-on experience with Azure Data Factory, Synapse, Databricks, PySpark, Python, and SQL; proven track record in delivering data migration projects within Azure environments; ability to work independently and communicate effectively with technical and non-technical stakeholders; previous experience in consultancy or client-facing roles is advantageous.
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
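The extract-transform-load pattern this listing asks for can be sketched in a few lines. In production it would run as PySpark DataFrame code on Azure Databricks; plain Python structures stand in here so the stages are easy to follow, and all field names (customer_id, amount) are hypothetical.

```python
# Minimal sketch of an extract -> transform -> load migration step.
# A real pipeline would use PySpark DataFrames on Databricks; plain
# Python lists and dicts stand in here. Field names are hypothetical.

def extract(rows):
    """Extract: copy raw source records (e.g. rows landed from a legacy system)."""
    return [dict(r) for r in rows]

def transform(records):
    """Transform: apply mapping rules and basic data-quality checks."""
    cleaned = []
    for r in records:
        if r.get("customer_id") is None:   # quality gate: drop unkeyed rows
            continue
        cleaned.append({
            "customer_id": str(r["customer_id"]).strip(),
            "amount_gbp": round(float(r.get("amount", 0)), 2),
        })
    return cleaned

def load(records, sink):
    """Load: append validated records to the target store (a list stands in)."""
    sink.extend(records)
    return len(records)

# Usage: run the three stages end to end.
source = [{"customer_id": " 42 ", "amount": "19.999"}, {"customer_id": None}]
target = []
loaded = load(transform(extract(source)), target)
print(loaded, target)
```

Keeping each stage a separate function mirrors how migration pipelines are documented and tested stage by stage, which the listing's knowledge-transfer requirement implies.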
role: adapt and deploy a powerful data platform to solve complex business problems; design scalable generative AI workflows using modern platforms like Palantir AIP; execute advanced data integration using PySpark and distributed technologies; collaborate directly with clients to understand priorities and deliver outcomes. What we're looking for: strong skills in PySpark, Python, and SQL; ability to translate …
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
leaders, working at the intersection of cutting-edge technology and real-world impact. As part of this role, you will be responsible for: executing complex data integration projects using PySpark and distributed technologies; designing and implementing scalable generative AI workflows using modern AI infrastructure; collaborating with cross-functional teams to ensure successful delivery and adoption; driving continuous improvement and innovation across client engagements. To be successful in this role, you will have: experience working in data engineering or data integration; strong technical skills in Python or PySpark; exposure to generative AI platforms or interest in building AI-powered workflows; ability to work closely with clients and lead delivery in fast-paced environments; exposure to Airflow, Databricks, or DBT …
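The listing above names Airflow among the orchestration tools. The core idea behind Airflow is a DAG of tasks executed in dependency order; this stdlib sketch shows that pattern without the Airflow scheduler, using Python's graphlib. The task names and the three-step flow are hypothetical.

```python
# Sketch of dependency-ordered task execution, the core idea behind an
# Airflow DAG, using only the standard library. Task names are hypothetical.
from graphlib import TopologicalSorter

def run_pipeline(tasks, deps):
    """Run callables in an order that respects the dependency graph.

    tasks: name -> callable taking the dict of earlier results.
    deps:  name -> set of names it depends on (its predecessors).
    """
    order = TopologicalSorter(deps).static_order()
    results = {}
    for name in order:
        results[name] = tasks[name](results)
    return results

# Usage: a three-step ingest -> enrich -> publish flow.
tasks = {
    "ingest":  lambda r: ["raw_a", "raw_b"],
    "enrich":  lambda r: [x.upper() for x in r["ingest"]],
    "publish": lambda r: len(r["enrich"]),
}
deps = {"enrich": {"ingest"}, "publish": {"enrich"}}
print(run_pipeline(tasks, deps)["publish"])
```

In Airflow the same structure is declared with operators and `>>` dependencies; the scheduler, retries, and backfills are what the framework adds on top of this ordering logic.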
London, South East, England, United Kingdom Hybrid / WFH Options
Oliver James
I'm currently working with a leading insurance broker who is looking to hire a Lead Azure Data Engineer on an initial 12-month fixed-term contract … an Azure-based data lakehouse. Key requirements: proven experience working as a principal or lead data engineer; strong background working with large datasets, with proficiency in SQL, Python, and PySpark; experience managing and mentoring engineers with varying levels of experience; hands-on experience deploying pipelines within Azure Databricks, ideally following the Medallion Architecture framework. Hybrid working: minimum two days …
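The Medallion Architecture mentioned above layers a lakehouse into bronze (raw), silver (cleansed), and gold (business-ready) tables. On Databricks these layers are Delta tables written with PySpark; the sketch below uses plain dicts to show only the layering idea, and all field names (policy_id, premium) are hypothetical.

```python
# Sketch of the Medallion Architecture's bronze -> silver -> gold layering.
# On Databricks each layer is a Delta table written with PySpark; plain
# Python structures stand in here. Field names are hypothetical.

def to_bronze(raw_rows):
    """Bronze: land source data as-is, adding only lineage metadata."""
    return [{"_source": "legacy_db", **row} for row in raw_rows]

def to_silver(bronze_rows):
    """Silver: cleanse and conform - drop invalid rows, normalise types."""
    return [
        {"policy_id": str(r["policy_id"]), "premium": float(r["premium"])}
        for r in bronze_rows
        if r.get("policy_id") is not None
    ]

def to_gold(silver_rows):
    """Gold: business-level aggregate ready for reporting."""
    return {"policy_count": len(silver_rows),
            "total_premium": sum(r["premium"] for r in silver_rows)}

# Usage: push two raw rows through all three layers.
raw = [{"policy_id": 1, "premium": "100.5"}, {"policy_id": None, "premium": "7"}]
print(to_gold(to_silver(to_bronze(raw))))
```

The design point is that each layer only reads from the one below it, so data-quality rules live in exactly one place and raw history in bronze is never lost.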
Senior Applied Data Scientist (FTC until end of March 2026), London. dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First.