London (City of London), South East England, United Kingdom
HCLTech
data retention and archival strategies in cloud environments. Strong understanding and practical implementation of Medallion Architecture (Bronze, Silver, Gold layers) for structured data processing. Advanced programming skills in Python, PySpark, and SQL, with the ability to build modular, efficient, and scalable data pipelines. Deep expertise in data modeling for both relational databases and data warehouses, including Star and Snowflake schemas.
Databricks, or equivalent). Proficiency in ELT/ETL development using tools such as Data Factory, Dataflow Gen2, Databricks Workflows, or similar orchestration frameworks. Experience with Python and/or PySpark for data transformation, automation, or pipeline development. Familiarity with cloud services and deployment automation (e.g., Azure, AWS, Terraform, CI/CD, Git). Ability to deliver clear, insightful, and performant …
engagement. Drive innovation through advanced analytics and research-based problem solving. To be successful you should have: 10 years' hands-on experience in AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Previous experience in implementing best practices for data engineering, including data governance, data quality, and data security. Proficiency …
complex data sets. Collaborate with data scientists to deploy machine learning models. Contribute to strategy, planning, and continuous improvement. Required Experience: Hands-on experience with AWS data tools: Glue, PySpark, Athena, Iceberg, Lake Formation. Strong Python and SQL skills for data processing and analysis. Deep understanding of data governance, quality, and security. Knowledge of market data and its business …
City of London, London, United Kingdom Hybrid / WFH Options
Hexegic
and validate data models and outputs. Set up monitoring and ensure data health for outputs. What we are looking for: Proficiency in Python, with experience in Apache Spark and PySpark. Previous experience with data analytics software. Ability to scope new integrations and translate user requirements into technical specifications. What’s in it for you? Base salary of …
City of London, London, United Kingdom Hybrid / WFH Options
KPMG UK
language processing or other relevant AI fields. Proven track record of designing, developing, and deploying AI systems in production environments. Proficient in Python and key ML libraries (e.g. PyTorch, PySpark, scikit-learn, Hugging Face Transformers). Hands-on experience with modern data platforms and AI tooling such as Azure ML, Databricks, MLflow, LangChain, LangGraph. Proven experience with modern engineering …
London (City of London), South East England, United Kingdom
Morela Solutions
on experience or strong interest in working with Foundry as a core platform. Forward-Deployed Engineering – delivering real-time solutions alongside users and stakeholders. Broader Skillsets of Interest: Python & PySpark – for data engineering and workflow automation; Platform Engineering – building and maintaining scalable, resilient infrastructure; Cloud (AWS preferred) – deploying and managing services in secure environments; Security Engineering & Access Control – designing …
individuals across 100 countries and has a reach of 600 million users, is recruiting an MLOps Engineer who has Chatbot (Voice) integration project experience using Python, PyTorch, PySpark and AWS LLM/Generative AI. Our client is paying £400 per day Outside IR35 to start ASAP for an initial 6-month contract on a hybrid basis near Stratford …
role for you. Key Responsibilities: Adapt and deploy a cutting-edge platform to meet customer needs; design scalable generative AI workflows (e.g., using Palantir); execute complex data integrations using PySpark and similar tools; collaborate directly with clients to understand their priorities and deliver impact. Why Join? Be part of a mission-driven startup redefining how industrial companies operate. Work …
Central London, London, United Kingdom Hybrid / WFH Options
iDPP
someone who enjoys building scalable data solutions while staying close to business impact. The Role: As a Data Analytics Engineer, you'll design, build, and maintain reliable data pipelines, primarily using PySpark, SQL, and Python, to ensure business teams (analysts, product managers, finance, operations) have access to well-modeled, actionable data. You'll work closely with stakeholders to translate business needs into … spend more time coding, managing data infrastructure, and ensuring pipeline reliability. Who We're Looking For. Data Analytics: Analysts who have strong experience building and maintaining data pipelines (particularly in PySpark/SQL) and want to work on production-grade infrastructure. Data Engineering: Engineers who want to work more closely with business stakeholders and enable analytics-ready data solutions. Analytics … Professionals who already operate in this hybrid space, with proven expertise across big data environments, data modeling, and business-facing delivery. Key Skills & Experience: Strong hands-on experience with PySpark, SQL, and Python; proven track record of building and maintaining data pipelines; ability to translate business requirements into robust data models and solutions; experience with data validation, quality checks …
Data Engineer - PySpark - Palantir - London - £75K
Join a dynamic team dedicated to innovative data solutions in Soho! We are seeking a Data Engineer with expertise in PySpark to play a pivotal role as the team and business grow. This permanent, hybrid position offers a unique opportunity to contribute to impactful projects while enjoying a supportive and collaborative work environment. Key Responsibilities: - Develop and optimise data pipelines using PySpark to ensure efficient data processing - Collaborate with cross-functional teams to deliver high-quality, data-driven insights - Engage in continuous learning and implement best practices in data management - Participate in the deployment and maintenance of data solutions within the Microsoft ecosystem. The ideal candidate will have strong analytical skills …