platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks … practices. Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3 years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and …
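For illustration only, a minimal PySpark sketch of the kind of cleaning, enrichment, aggregation, and in-pipeline quality check this role describes; the paths and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-transform").getOrCreate()

# Hypothetical raw input landed by an upstream ingestion job
orders = spark.read.parquet("/mnt/raw/orders")

# Clean: deduplicate on the business key and normalise country codes
cleaned = (
    orders.dropDuplicates(["order_id"])
          .withColumn("country", F.upper(F.trim("country")))
          .filter(F.col("order_id").isNotNull())
)

# Validation rule: stop the pipeline if negative amounts slip through
bad_rows = cleaned.filter(F.col("amount") < 0).count()
if bad_rows > 0:
    raise ValueError(f"{bad_rows} rows failed the amount >= 0 rule")

# Aggregate: daily revenue per country for downstream reporting
daily = (
    cleaned.groupBy("country", "order_date")
           .agg(F.sum("amount").alias("revenue"))
)
daily.write.mode("overwrite").parquet("/mnt/curated/daily_revenue")
```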
in Microsoft Azure cloud technologies. Strong inclination to learn and adapt to new technologies and languages. What will be your key responsibilities? Collaborate in hands-on development using Python, PySpark, and other relevant technologies to create and maintain data assets and reports for business insights. Assist in engineering and managing data models and pipelines within a cloud environment, utilizing … technologies like Databricks, Spark, Delta Lake, and SQL. Contribute to the maintenance and enhancement of our progressive tech stack, which includes Python, PySpark, Logic Apps, Azure Functions, ADLS, Django, and ReactJS. Support the implementation of DevOps and CI/CD methodologies to foster agile collaboration and contribute to building robust data solutions. Collaborate with the team to learn and …
City of London, London, United Kingdom Hybrid / WFH Options
Noir
Data Engineer - Leading Energy Company - London (Tech Stack: Data Engineer, Databricks, Python, PySpark, Power BI, AWS QuickSight, AWS, TSQL, ETL, Agile Methodologies) Company Overview: Join a dynamic team, a leading player in the energy sector, committed to innovation and sustainable solutions. Our client is seeking a talented Data Engineer to help build and optimise their data infrastructure, enabling them …
technology experience Strong experience in System Integration, Application Development or Data-Warehouse projects, across technologies used in the enterprise space. Software development experience using: Object-oriented languages (e.g., Python, PySpark) and frameworks Stakeholder Management Expertise in relational and dimensional modelling, including big data technologies. Exposure across all the SDLC process, including testing and deployment. Expertise in Microsoft Azure is …
in both data engineering and machine learning, with a strong portfolio of relevant projects. Proficiency in Python with libraries like TensorFlow, PyTorch, or Scikit-learn for ML, and Pandas, PySpark, or similar for data processing. Experience designing and orchestrating data pipelines with tools like Apache Airflow, Spark, or Kafka. Strong understanding of SQL, NoSQL, and data modeling. Familiarity with …
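As a sketch of the pipeline orchestration this role mentions, a minimal Apache Airflow DAG (assuming Airflow 2.4+) chaining an extract task into a transform task; the DAG id, task bodies, and schedule are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Hypothetical: pull raw records from a source system into staging
    print("extracting records to staging")


def transform():
    # Hypothetical: clean the staged records and load the warehouse
    print("transforming staged records")


with DAG(
    dag_id="example_etl",            # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task   # transform runs only after extract succeeds
```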
City of London, London, United Kingdom Hybrid / WFH Options
Bounce Digital
from internal (Odoo/PostgreSQL) and external (eBay APIs) sources. Define data quality rules, set up monitoring/logging, and support architecture decisions. What You Bring: Strong SQL & Python (PySpark); hands-on with GCP or AWS. Experience with modern ETL tools (dbt, Airflow, Fivetran). BI experience (Looker, Power BI, Metabase); Git and basic CI/CD exposure. Background in …
members. Drive platform improvements through DevOps and Infrastructure-as-Code (ideally using Terraform). Take ownership of system observability, stability, and documentation. Requirements: Strong experience in Python (especially Pandas and PySpark) and SQL. Proven expertise in building data pipelines and working with Databricks and Lakehouse environments. Deep understanding of Azure (or similar cloud platforms), including Virtual Networks and secure data …
City of London, London, United Kingdom Hybrid / WFH Options
Mars
pet owners everywhere. Join us on a multi-year digital transformation journey where your work will unlock real impact. 🌟 What you'll do: Build robust data pipelines using Python, PySpark, and cloud-native tools. Engineer scalable data models with Databricks, Delta Lake, and Azure tech. Collaborate with analysts, scientists, and fellow engineers to deliver insights. Drive agile DevOps practices …
processes. Develop dashboards and visualizations. Work closely with data scientists and stakeholders. Follow CI/CD and code best practices (Git, testing, reviews). Tech Stack & Experience: Strong Python (Pandas), PySpark, and SQL skills. Cloud data tools (Azure Data Factory, Synapse, Databricks, etc.). Data integration experience across formats and platforms. Strong communication and data literacy. Nice to Have: Commodities/…
a focus on data quality at scale. Hands-on expertise in core GCP data services such as BigQuery, Composer, Dataform, Dataproc, and Pub/Sub. Strong programming skills in PySpark, Python, and SQL. Proficiency in ETL processes, data mining, and data storage principles. Experience with BI and data visualisation tools, such as Looker or Power BI. Excellent communication skills …
City of London, London, United Kingdom Hybrid / WFH Options
Recruit with Purpose
they modernise the use of their data. Overview of responsibilities in the role: Design and maintain scalable, high-performance data pipelines using Azure Data Platform tools such as Databricks (PySpark), Data Factory, and Data Lake Gen2. Develop curated data layers (bronze, silver, gold) optimised for analytics, reporting, and AI/ML, ensuring they meet performance, governance, and reuse standards. …
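As a rough sketch of the bronze/silver/gold layering this role describes, assuming Databricks with Delta tables; all paths and columns are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: raw events stored exactly as received
bronze = spark.read.json("/mnt/lake/bronze/events")

# Silver: cleaned and conformed - typed timestamps, duplicates removed
silver = (
    bronze.dropDuplicates(["event_id"])
          .withColumn("event_ts", F.to_timestamp("event_ts"))
)
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/events")

# Gold: business-level aggregate ready for reporting and AI/ML features
gold = silver.groupBy(F.to_date("event_ts").alias("event_date")).count()
gold.write.format("delta").mode("overwrite").save("/mnt/lake/gold/daily_event_counts")
```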
City of London, England, United Kingdom Hybrid / WFH Options
Pioneer Search
Data Engineer Azure | Databricks | PySpark | Hybrid Cloud | Fabric Location: London (Hybrid) Salary: £85,000 + Bonus + Benefits Type: Permanent A Data Engineer is required for a fast-evolving (re)insurance business at the heart of the Lloyd's market, currently undergoing a major data transformation. With a strong foundation in the industry and a clear vision for the … for a Data Engineer to join their growing team. This is a hands-on role focused on building scalable data pipelines and enhancing a modern Lakehouse architecture using Databricks, PySpark, and Azure. The environment is currently hybrid cloud and on-prem, with a strategic move towards Microsoft Fabric, so experience across both is highly valued. What you'll … be doing: Building and maintaining robust data pipelines using Databricks, PySpark, and Azure Data Factory. Enhancing and maintaining a Lakehouse architecture using Medallion principles. Working across both cloud and on-prem environments, supporting the transition to Microsoft Fabric. Collaborating with stakeholders across Underwriting, Actuarial, and Finance to deliver high-impact data solutions. Supporting DevOps practices and CI …
practice. Essential Experience: Proven expertise in building data warehouses and ensuring data quality on GCP. Strong hands-on experience with BigQuery, Dataproc, Dataform, Composer, and Pub/Sub. Skilled in PySpark, Python, and SQL. Solid understanding of ETL/ELT processes. Clear communication skills and ability to document processes effectively. Desirable Skills: GCP Professional Data Engineer certification. Exposure to Agentic …
Create solutions and environments to enable Analytics and Business Intelligence capabilities. Your Profile. Essential skills/knowledge/experience: Design, develop, and maintain scalable ETL pipelines using AWS Glue (PySpark). Strong hands-on experience with DBT (Cloud or Core). Implement and manage DBT models for data transformation and modeling in a modern data stack. Proficiency in SQL … Python, and PySpark. Experience with AWS services such as S3, Athena, Redshift, Lambda, and CloudWatch. Familiarity with data warehousing concepts and modern data stack architectures. Experience with CI/CD pipelines and version control (e.g., Git). Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements. Optimize data workflows for performance, scalability, and cost …
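For illustration, a skeletal AWS Glue (PySpark) job of the shape this role describes - reading from S3, applying a transformation, and writing the result back; the bucket names and filter are hypothetical, and the awsglue modules are assumed to come from the Glue runtime:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue bootstrapping: resolve job arguments and build contexts
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical source and target locations in S3
df = glue_context.spark_session.read.parquet("s3://example-raw/sales/")
out = (
    df.filter(F.col("amount") > 0)                    # drop invalid records
      .withColumn("ingested_at", F.current_timestamp())
)
out.write.mode("append").parquet("s3://example-curated/sales/")

job.commit()
```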
City of London, London, United Kingdom Hybrid / WFH Options
un:hurd music
Integration: Develop and integrate efficient data pipelines by collecting high-quality, consistent data from external APIs and ensuring seamless incorporation into existing systems. Big Data Management and Storage: Utilize PySpark for scalable processing of large datasets, implementing best practices for distributed computing. Optimize data storage and querying within a data lake environment to enhance accessibility and performance. ML R…
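A minimal sketch of the data-lake optimisation this role describes - partitioning large API-sourced data on write so downstream queries can prune files; the paths and columns are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lake-optimise").getOrCreate()

# Hypothetical: a large event feed collected from external APIs
events = spark.read.json("/lake/landing/events")

# Partitioning by date means queries that filter on event_date
# read only the matching directories instead of the whole dataset
(
    events.withColumn("event_date", F.to_date("ingested_at"))
          .repartition("event_date")   # avoids many small files per partition
          .write.mode("overwrite")
          .partitionBy("event_date")
          .parquet("/lake/processed/events")
)
```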
City of London, London, United Kingdom Hybrid / WFH Options
Databuzz Ltd
As a Data Engineer, you will play a crucial role in designing, developing, and maintaining data architecture and infrastructure. The successful candidate should possess a strong foundation in Python, PySpark, SQL, and ETL processes, with a demonstrated ability to implement solutions in a cloud environment. Position - Sr Data Engineer Experience - 6-9 Years Location - London Job Type - Hybrid, Permanent … Mandatory Skills: Design, build, and maintain data pipelines using Python, PySpark, and SQL. Develop and maintain ETL processes to move data from various data sources to our data warehouse on AWS/Azure/GCP. Collaborate with data scientists and business analysts to understand their data needs and develop solutions that meet their requirements. Develop and maintain data models and data dictionaries … improve the performance and scalability of our data solutions. Qualifications: Minimum 6+ years of total experience, with at least 4 years of hands-on experience with the mandatory skills: Python, PySpark, SQL. …
Promote clean, efficient, and maintainable coding practices. Required Technical Skills: Proven experience in data warehouse architecture and implementation. Expertise in designing and configuring Azure-based deployment pipelines. SQL, Python, PySpark; Azure Data Lake + Databricks; traditional ETL tools. This is an excellent opportunity for a talented Senior Data Engineer to join a business that is looking to build a best …
City of London, London, United Kingdom Hybrid / WFH Options
Intec Select
complex ideas. Proven ability to manage multiple projects and meet deadlines in dynamic environments. Proficiency with SQL Server in high-transaction settings. Experience with either C# or Python/PySpark for data tasks. Hands-on knowledge of Azure cloud services, such as Databricks, Event Hubs, and Function Apps. Solid understanding of DevOps principles and tools like Git, Azure DevOps …
City of London, London, United Kingdom Hybrid / WFH Options
La Fosse
processes using AWS, Snowflake, etc. Collaborate across technical and non-technical teams. Troubleshoot issues and support wider team adoption of the platform. What You'll Bring: Proficiency in Python, PySpark, Spark SQL, or Java. Experience with cloud tools (Lambda, S3, EKS, IAM). Knowledge of Docker, Terraform, GitHub Actions. Understanding of data quality frameworks. Strong communicator and team player. What …
and optimize workflows, ensuring efficient and reliable operations. Required: 6-10 years of experience in software development with a focus on production-grade code. Proficiency in Java, Python, and PySpark; experience with C++ is a plus. Deep expertise in Azure services, including Azure Storage, and familiarity with AWS S3. Strong understanding of data security, including encryption/decryption. Proven …
City of London, London, United Kingdom Hybrid / WFH Options
Syntax Consultancy Limited
using CI/CD, along with proficiency in designing and implementing CI/CD pipelines in cloud environments. Excellent practical expertise in performance tuning and system optimisation. Experience with PySpark and Azure Databricks for distributed data processing and large-scale data analysis. Proven experience with web frameworks, including knowledge of Django and experience with Flask, along with a solid …
City of London, London, United Kingdom Hybrid / WFH Options
Ampstek
know your rate expectation. Role: Technology Lead Location: London UK (Hybrid | 3 days/week working from office) Duration: 6 months (Contract) Skill Sets: • Palantir Foundry • ETL • Spark • Python • PySpark • Informatica • AWS • SQL, PL/SQL • Shell scripting • Data Lake • Data warehousing • Scala • Oracle • MS SQL Server • Power BI. Thanks & Regards Milan | Talent Acquisition | Europe & UK Ampstek Services Limited Kemp House …
City of London, London, United Kingdom Hybrid / WFH Options
Burns Sheehan
Data Engineering Manager 💰 £110,000-£115,000 + 10% bonus 🖥️ Databricks, Snowflake, Terraform, PySpark, Azure 🌍 London, hybrid working (2 days in office) 🏠 Leading property data & risk software company We are partnered with a leading property data & risk software company who contribute valuations, insights and decisioning technology to over 1 million mortgage approvals each year. They are looking for a … affects bottom line. You are driving their business forward, not just helping them make decisions but opening the door to make better decisions. Tech Stack: Databricks, Azure, Python, PySpark, Terraform. What's in it for you... 7.5% pension contribution by the company Discretionary annual bonus up to 10% of base salary 25 days annual leave + extra days … Financial Support Free Calm App membership Gym on-site Cycle to work and electric car schemes …
City of London, London, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions
deep learning, or statistical modeling. Strong hands-on experience with ML frameworks (PyTorch, TensorFlow, Keras). Proficiency in Python and C/C++. Experience with scalable data tools (e.g., PySpark, Kubernetes, Databricks, Apache Arrow). Proven ability to manage GPU-intensive data processing jobs. 4+ years of applied research or industry experience. Creative problem-solver with a bias for …