London (City of London), South East England, United Kingdom
HCLTech
data retention and archival strategies in cloud environments. Strong understanding and practical implementation of Medallion Architecture (Bronze, Silver, Gold layers) for structured data processing. Advanced programming skills in Python, PySpark, and SQL, with the ability to build modular, efficient, and scalable data pipelines. Deep expertise in data modeling for both relational databases and data warehouses, including Star and Snowflake schemas …
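A minimal PySpark sketch of the Bronze → Silver → Gold flow the listing above refers to; the paths, table names, and columns are invented for illustration, and a Delta Lake-enabled Spark session is assumed.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_example").getOrCreate()

# Bronze: land the raw feed as-is, stamped with an ingestion time
bronze = (spark.read.json("/landing/orders/")          # illustrative path
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/lake/bronze/orders")

# Silver: typed, deduplicated, validated records
silver = (spark.read.format("delta").load("/lake/bronze/orders")
          .dropDuplicates(["order_id"])
          .filter(F.col("amount").isNotNull())
          .withColumn("amount", F.col("amount").cast("decimal(18,2)")))
silver.write.format("delta").mode("overwrite").save("/lake/silver/orders")

# Gold: business-level aggregate ready for reporting
gold = (silver.groupBy("customer_id")
        .agg(F.sum("amount").alias("total_spend"),
             F.count("order_id").alias("order_count")))
gold.write.format("delta").mode("overwrite").save("/lake/gold/customer_spend")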
engagement. * Drive innovation through advanced analytics and research-based problem solving. To be successful you should have: 10 years' hands-on experience in AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Previous experience in implementing best practices for data engineering, including data governance, data quality, and data security. Proficiency …
complex data sets. Collaborate with data scientists to deploy machine learning models. Contribute to strategy, planning, and continuous improvement. Required Experience: Hands-on experience with AWS data tools: Glue, PySpark, Athena, Iceberg, Lake Formation. Strong Python and SQL skills for data processing and analysis. Deep understanding of data governance, quality, and security. Knowledge of market data and its business …
City of London, London, United Kingdom Hybrid / WFH Options
Careerwise
end product lifecycle management. What We’re Looking For 5+ years in data engineering/platform roles, with 2+ years running Databricks at scale. Expert in Apache Spark, PySpark, Delta Lake, Unity Catalog, MLflow. Strong Python engineering background with modern CI/CD pipelines. Proven experience optimising Databricks for performance and cost. Track record of leading teams …
Central London / West End, London, United Kingdom Hybrid / WFH Options
Careerwise
end product lifecycle management. What We’re Looking For 5+ years in data engineering/platform roles, with 2+ years running Databricks at scale. Expert in Apache Spark, PySpark, Delta Lake, Unity Catalog, MLflow. Strong Python engineering background with modern CI/CD pipelines. Proven experience optimising Databricks for performance and cost. Track record of leading teams …
City of London, London, United Kingdom Hybrid / WFH Options
Hexegic
and validate data models and outputs Set up monitoring and ensure data health for outputs What we are looking for Proficiency in Python, with experience in Apache Spark and PySpark Previous experience with data analytics software Ability to scope new integrations and translate user requirements into technical specifications What’s in it for you? Base salary of …
London (City of London), South East England, United Kingdom Hybrid / WFH Options
Hexegic
and validate data models and outputs Set up monitoring and ensure data health for outputs What we are looking for Proficiency in Python, with experience in Apache Spark and PySpark Previous experience with data analytics software Ability to scope new integrations and translate user requirements into technical specifications What’s in it for you? Base salary of …
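A minimal sketch of the kind of data-health monitoring the role above calls for, assuming a PySpark environment; the dataset path, column names, and thresholds are illustrative rather than taken from the listing.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("/data/outputs/positions")   # illustrative path and schema

# Basic health metrics: volume, null rate on a key field, and freshness
metrics = df.agg(
    F.count(F.lit(1)).alias("row_count"),
    F.avg(F.col("account_id").isNull().cast("int")).alias("null_rate_account_id"),
    F.max("as_of_date").alias("latest_as_of_date"),
).collect()[0]

# Fail loudly so the issue is surfaced by whatever scheduler or alerting wraps the job
if metrics["row_count"] == 0 or metrics["null_rate_account_id"] > 0.01:
    raise ValueError(f"Data health check failed: {metrics.asDict()}")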
City of London, London, United Kingdom Hybrid / WFH Options
KPMG UK
language processing or other relevant AI fields. Proven track record of designing, developing, and deploying AI systems in production environments. Proficient in Python and key ML libraries (e.g. PyTorch, PySpark, scikit-learn, Hugging Face Transformers). Hands-on experience with modern data platforms and AI tooling such as Azure ML, Databricks, MLflow, LangChain, LangGraph. Proven experience with modern engineering …
London (City of London), South East England, United Kingdom Hybrid / WFH Options
KPMG UK
language processing or other relevant AI fields. Proven track record of designing, developing, and deploying AI systems in production environments. Proficient in Python and key ML libraries (e.g. PyTorch, PySpark, scikit-learn, Hugging Face Transformers). Hands-on experience with modern data platforms and AI tooling such as Azure ML, Databricks, MLflow, LangChain, LangGraph. Proven experience with modern engineering …
Enforce GDPR-compliant governance and security * Optimize performance and cost of data workflows * Collaborate with teams to deliver clean, structured data Key Skills Required: * Azure data services, Python/PySpark/SQL, data modelling * Power BI (preferred), legal system familiarity (bonus) * Strong grasp of UK data regulations Certifications: * Microsoft certifications (e.g., Azure Data Engineer Associate, Azure Solutions Architect) desirable …
London (City of London), South East England, United Kingdom
Miryco Consultants Ltd
and machine learning use cases. Support the migration of legacy reporting tools into Databricks and modern BI solutions. Key Skills & Experience Essential: Strong hands-on experience with Databricks (SQL, PySpark, Delta Lake). Solid knowledge of BI and data visualisation tools (e.g., Power BI, Tableau, Qlik). Strong SQL and data modelling skills. Experience working with large, complex financial …
London (City of London), South East England, United Kingdom
RedCat Digital
also identifying transformational opportunities for sales productivity. 🔑 What You’ll Bring Must-Have Skills 5+ years in Sales Ops/Rev Ops roles Proficient in SQL and Python (Pandas, PySpark) Strong experience with cloud data platforms (Databricks, Snowflake, GCP) Background in building ETL/ELT solutions and data modelling Advanced Excel/Power BI/VBA skills 2+ years …
London (City of London), South East England, United Kingdom
Morela Solutions
on experience or strong interest in working with Foundry as a core platform Forward-Deployed Engineering – delivering real-time solutions alongside users and stakeholders Broader Skillsets of Interest: Python & PySpark – for data engineering and workflow automation Platform Engineering – building and maintaining scalable, resilient infrastructure Cloud (AWS preferred) – deploying and managing services in secure environments Security Engineering & Access Control – designing …
City of London, London, United Kingdom Hybrid / WFH Options
Signify Technology
with NLP, image classifiers, deep learning, or large language models. Familiarity with A/B testing and experiment design. Experience building shared ML systems or platforms. Knowledge of Databricks, PySpark, and cloud platforms (AWS/GCP/Azure). Benefits: Flexible hybrid working model Private medical insurance, healthcare cash plan, subsidised counselling/coaching, Employee Assistance Programme, and Mental …
London (City of London), South East England, United Kingdom Hybrid / WFH Options
Signify Technology
with NLP, image classifiers, deep learning, or large language models. Familiarity with A/B testing and experiment design. Experience building shared ML systems or platforms. Knowledge of Databricks, PySpark, and cloud platforms (AWS/GCP/Azure). Benefits: Flexible hybrid working model Private medical insurance, healthcare cash plan, subsidised counselling/coaching, Employee Assistance Programme, and Mental …
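For the A/B testing and experiment design the listing above mentions, a self-contained sketch of a two-proportion z-test in plain Python; the arm sizes and conversion counts are invented for the example and the normal approximation is assumed to hold.

import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # Two-sided p-value via the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 10,000 users per arm
z, p = two_proportion_ztest(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")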
London (City of London), South East England, United Kingdom
Mastek
platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks … practices. Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and …
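A short PySpark sketch of the in-pipeline validation and transformation pattern the listing above describes: rows that fail quality rules are quarantined rather than silently dropped, then the clean records are enriched and aggregated. All paths, columns, and rules are illustrative, and a Delta Lake-enabled Spark session is assumed.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
raw = spark.read.format("delta").load("/lake/bronze/transactions")   # illustrative source

# Validation rules applied inside the pipeline
rules = (F.col("transaction_id").isNotNull()
         & (F.col("amount") > 0)
         & F.col("currency").isin("GBP", "EUR", "USD"))
valid = raw.filter(rules)
quarantine = raw.filter(~rules).withColumn("_rejected_at", F.current_timestamp())

# Clean, enrich, and aggregate the validated records
enriched = valid.withColumn("amount_gbp", F.col("amount") * F.col("fx_rate_to_gbp"))
daily = enriched.groupBy("trade_date", "desk").agg(F.sum("amount_gbp").alias("gross_gbp"))

daily.write.format("delta").mode("overwrite").save("/lake/silver/daily_desk_totals")
quarantine.write.format("delta").mode("append").save("/lake/quarantine/transactions")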
London (City of London), South East England, United Kingdom
Ubique Systems
Spark - Must have. Scala - Must have, hands-on coding. Hive & SQL - Must have. Note: candidates must at least know the Scala coding language; a PySpark-only profile will not help here. The interview includes a coding test. Job Description: Scala/Spark • Good Big Data resource with the below skillset: Spark, Scala, Hive/HDFS/HQL • Linux-based Hadoop ecosystem (HDFS, Impala, Hive …