PySpark Jobs in Central London

1 to 25 of 35 PySpark Jobs in Central London

Senior Data Engineer

City of London, London, United Kingdom
Mars
alignment and shared value creation. As a Data Engineer in the Commercial team, your key responsibilities are as follows: 1. Technical Proficiency: Collaborate in hands-on development using Python, PySpark, and other relevant technologies to create and maintain data assets and reports for business insights. Assist in engineering and managing data models and pipelines within a cloud environment, utilizing … technologies like Databricks, Spark, Delta Lake, and SQL. Contribute to the maintenance and enhancement of our progressive tech stack, which includes Python, PySpark, Logic Apps, Azure Functions, ADLS, Django, and ReactJs. Support the implementation of DevOps and CI/CD methodologies to foster agile collaboration and contribute to building robust data solutions. Develop code that adheres to high-quality … Data Engineering Lead, actively participating in team discussions and sharing ideas to improve platform excellence. What are we looking for? Strong experience as a Senior Data Engineer. Experience with PySpark, SQL and Databricks. Proficiency in working with cloud environments and various platforms, including Azure and SQL Server. Experience with NoSQL databases is good to have. Hands-on experience with data pipeline …

Senior Data Engineer

City of London, London, United Kingdom
Mastek
platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks … practices. Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and …

Data Engineer

City of London, London, United Kingdom
Mars
in Microsoft Azure cloud technologies Strong inclination to learn and adapt to new technologies and languages. What will be your key responsibilities? Collaborate in hands-on development using Python, PySpark, and other relevant technologies to create and maintain data assets and reports for business insights. Assist in engineering and managing data models and pipelines within a cloud environment, utilizing … technologies like Databricks, Spark, Delta Lake, and SQL. Contribute to the maintenance and enhancement of our progressive tech stack, which includes Python, PySpark, Logic Apps, Azure Functions, ADLS, Django, and ReactJs. Support the implementation of DevOps and CI/CD methodologies to foster agile collaboration and contribute to building robust data solutions. Collaborate with the team to learn and …

Data Engineer - Pyspark / Palantir

City of London, England, United Kingdom
Whitehall Resources Ltd
Location: City of London, United Kingdom Job Category: Information Technology EU work permit required: Yes Job Reference: BBBH63893_1748355123 Posted: 27.05.2025 Expiry Date: 11.07.2025 Job Description: Data Engineer - Pyspark/… Palantir Whitehall Resources require a Data Engineer with experience of Pyspark & Palantir to work with a key client on an initial 6-month contract. *Inside IR35. *This role will require on-site work in London 2-3 days per week. Key responsibilities: • Developing Data Stores and Data Warehouse solutions • Design and develop data … in Agile methodology • Design and deliver quality solutions independently • Leading a team of Data Engineers and delivering solutions as a team Key skills/knowledge/experience: • Proficient in PySpark, Python, SQL with at least 5 years of experience • Working experience with the Palantir Foundry platform is a must • Experience designing and implementing data analytics solutions on enterprise data platforms and distributed …

Lead Architect

City of London, London, United Kingdom
Fractal
technology experience Strong experience in System Integration, Application Development or Data-Warehouse projects, across technologies used in the enterprise space. Software development experience using: Object-oriented languages (e.g., Python, PySpark) and frameworks Stakeholder Management Expertise in relational and dimensional modelling, including big data technologies. Exposure across the full SDLC process, including testing and deployment. Expertise in Microsoft Azure is …

Lead Engineer

City of London, London, United Kingdom
Fractal
technology experience Strong experience in System Integration, Application Development or Data-Warehouse projects, across technologies used in the enterprise space. Software development experience using: Object-oriented languages (e.g., Python, PySpark) and frameworks Stakeholder Management Expertise in relational and dimensional modelling, including big data technologies. Exposure across the full SDLC process, including testing and deployment. Expertise in Microsoft Azure is …

Machine Learning Engineer with Data Engineering expertise

City of London, London, United Kingdom
Tadaweb
in both data engineering and machine learning, with a strong portfolio of relevant projects. Proficiency in Python with libraries like TensorFlow, PyTorch, or Scikit-learn for ML, and Pandas, PySpark, or similar for data processing. Experience designing and orchestrating data pipelines with tools like Apache Airflow, Spark, or Kafka. Strong understanding of SQL, NoSQL, and data modeling. Familiarity with …

Data Engineer

City of London, London, United Kingdom
Hybrid / WFH Options
Bounce Digital
from internal (Odoo/PostgreSQL) and external (eBay APIs) sources Define data quality rules, set up monitoring/logging, and support architecture decisions What You Bring Strong SQL & Python (PySpark); hands-on with GCP or AWS Experience with modern ETL tools (dbt, Airflow, Fivetran) BI experience (Looker, Power BI, Metabase); Git and basic CI/CD exposure Background in …

Senior Data Engineer

City of London, London, United Kingdom
Xcede
members Drive platform improvements through DevOps and Infrastructure-as-Code (ideally using Terraform) Take ownership of system observability, stability, and documentation Requirements Strong experience in Python (especially Pandas and PySpark) and SQL Proven expertise in building data pipelines and working with Databricks and Lakehouse environments Deep understanding of Azure (or similar cloud platforms), including Virtual Networks and secure data …

Data Architect (GCP)

City of London, England, United Kingdom
Hybrid / WFH Options
JR United Kingdom
designing and maintaining large-scale data warehouses and data lakes. Expertise in GCP data services including BigQuery, Composer, Dataform, DataProc, and Pub/Sub. Strong programming experience with Python, PySpark, and SQL. Hands-on experience with data modelling, ETL processes, and data quality frameworks. Proficiency with BI/reporting tools such as Looker or PowerBI. Excellent communication and stakeholder …

Data Engineer – Python | Databricks | PySpark

City of London, London, United Kingdom
Hybrid / WFH Options
DATAHEAD
Data Engineer – Python | Databricks | PySpark Company: Fortune 500 Financial Services firm Location: Hybrid - London Type: Permanent Salary: £90k + 20% bonus + Exceptional Benefits Exclusively via DATAHEAD Are you a detail-oriented Python Developer who thrives in complex data environments? Do you have hands-on experience with Databricks, PySpark, and cloud-native data engineering? We’re hiring a … led environment where you’ll play a key role in developing high-quality, scalable data products. What You’ll Do: Build and maintain scalable Python applications using Databricks and PySpark Design and optimise robust data pipelines and processing frameworks Write clean, modular, and testable code aligned with SOLID principles Contribute to CI/CD pipelines, automated testing frameworks, and … the data architecture that underpins machine learning and AI models What You’ll Bring: Proven experience developing in Python for data-intensive applications Hands-on expertise with Databricks and PySpark Strong grasp of cloud data platforms and modern engineering practices Familiarity with CI/CD, version control (e.g. Git), and automated testing Ability to work effectively in cross-border …

Data Engineer

City of London, London, United Kingdom
Hybrid / WFH Options
Mars
pet owners everywhere. Join us on a multi-year digital transformation journey where your work will unlock real impact. 🌟 What you'll do Build robust data pipelines using Python, PySpark, and cloud-native tools Engineer scalable data models with Databricks, Delta Lake, and Azure tech Collaborate with analysts, scientists, and fellow engineers to deliver insights Drive agile DevOps practices …

Data Engineer

City of London, London, United Kingdom
Hybrid / WFH Options
Recruit with Purpose
they modernise the use of their data. Overview of responsibilities in the role: Design and maintain scalable, high-performance data pipelines using Azure Data Platform tools such as Databricks (PySpark), Data Factory, and Data Lake Gen2. Develop curated data layers (bronze, silver, gold) optimised for analytics, reporting, and AI/ML, ensuring they meet performance, governance, and reuse standards. …

Data Engineer

City Of London, England, United Kingdom
Hybrid / WFH Options
Pioneer Search
Data Engineer Azure | Databricks | PySpark | Hybrid Cloud | Fabric Location: London (Hybrid) Salary: £85,000 + Bonus + Benefits Type: Permanent A Data Engineer is required for a fast-evolving (re)insurance business at the heart of the Lloyd's market, currently undergoing a major data transformation. With a strong foundation in the industry and a clear vision for the … for a Data Engineer to join their growing team. This is a hands-on role focused on building scalable data pipelines and enhancing a modern Lakehouse architecture using Databricks, PySpark, and Azure. The environment is currently hybrid cloud and on-prem, with a strategic move towards Microsoft Fabric, so experience across both is highly valued. What you'll … be doing: Building and maintaining robust data pipelines using Databricks, PySpark, and Azure Data Factory. Enhance and maintain a Lakehouse architecture using Medallion principles Working across both cloud and on-prem environments, supporting the transition to Microsoft Fabric. Collaborating with stakeholders across Underwriting, Actuarial, and Finance to deliver high-impact data solutions. Support DevOps practices and CI …

GCP Data Engineer

City of London, London, United Kingdom
Anson McCade
practice Essential Experience: Proven expertise in building data warehouses and ensuring data quality on GCP Strong hands-on experience with BigQuery, Dataproc, Dataform, Composer, Pub/Sub Skilled in PySpark, Python and SQL Solid understanding of ETL/ELT processes Clear communication skills and ability to document processes effectively Desirable Skills: GCP Professional Data Engineer certification Exposure to Agentic …

ETL/ ELT Engineering SME

City of London, London, United Kingdom
Tata Consultancy Services
Create solutions and environments to enable Analytics and Business Intelligence capabilities. Your Profile Essential skills/knowledge/experience: Design, develop, and maintain scalable ETL pipelines using AWS Glue (PySpark). Strong hands-on experience with DBT (Cloud or Core). Implement and manage DBT models for data transformation and modeling in a modern data stack. Proficiency in SQL … Python, and PySpark. Experience with AWS services such as S3, Athena, Redshift, Lambda, and CloudWatch. Familiarity with data warehousing concepts and modern data stack architectures. Experience with CI/CD pipelines and version control (e.g., Git). Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements. Optimize data workflows for performance, scalability, and cost …

Data Scientist

City of London, London, United Kingdom
Synechron
workflow solutions that provide long-term scalability, reliability, and performance, and integration with reporting. Required Skills: Expertise and hands-on experience in advanced programming using: SAS/Python/pySpark and SQL for data mining; additional experience and knowledge of Big Data tools preferred. Excellent Python programming skills, including experience with relevant analytical and machine learning libraries (e.g., pandas …

Senior Data Engineer London Hybrid(6+ Years)

City of London, Greater London, UK
Hybrid / WFH Options
Databuzz Ltd
As a Data Engineer, you will play a crucial role in designing, developing, and maintaining data architecture and infrastructure. The successful candidate should possess a strong foundation in Python, Pyspark, SQL, and ETL processes, with a demonstrated ability to implement solutions in a cloud environment. Position - Sr Data Engineer Experience - 6-9 Years Location - London Job Type - Hybrid, Permanent … Mandatory Skills: Design, build, and maintain data pipelines using Python, Pyspark and SQL. Develop and maintain ETL processes to move data from various data sources to our data warehouse on AWS/Azure/GCP. Collaborate with data scientists and business analysts to understand their data needs and develop solutions that meet their requirements. Develop and maintain data models and data dictionaries … improve the performance and scalability of our data solutions. Qualifications: Minimum 6+ years of total experience. At least 4+ years of hands-on experience with the mandatory skills: Python, Pyspark, SQL.

Senior Data Engineer London Hybrid(6+ Years)

City of London, England, United Kingdom
Hybrid / WFH Options
JR United Kingdom
As a Data Engineer, you will play a crucial role in designing, developing, and maintaining data architecture and infrastructure. The successful candidate should possess a strong foundation in Python, Pyspark, SQL, and ETL processes, with a demonstrated ability to implement solutions in a cloud environment. Experience - 6-9 Years Location - London Job Type - Hybrid, Permanent Mandatory Skills: Design, build … maintain data pipelines using Python, Pyspark and SQL. Develop and maintain ETL processes to move data from various data sources to our data warehouse on AWS/Azure/GCP. Collaborate with data scientists and business analysts to understand their data needs and develop solutions that meet their requirements. Develop and maintain data models and data dictionaries for our data warehouse. … improve the performance and scalability of our data solutions. Qualifications: Minimum 6+ years of total experience. At least 4+ years of hands-on experience with the mandatory skills: Python, Pyspark, SQL.

Senior Data Engineer

City of London, Greater London, UK
Formula Recruitment
Promote clean, efficient, and maintainable coding practices. Required Technical Skills: Proven experience in data warehouse architecture and implementation. Expertise in designing and configuring Azure-based deployment pipelines. SQL, Python, PySpark. Azure Data Lake + Databricks. Traditional ETL tools. This is an excellent opportunity for a talented Senior Data Engineer to join a business that is looking to build a best …

Data Engineer

City of London, London, United Kingdom
Hybrid / WFH Options
Intec Select
complex ideas. Proven ability to manage multiple projects and meet deadlines in dynamic environments. Proficiency with SQL Server in high-transaction settings. Experience with either C# or Python/PySpark for data tasks. Hands-on knowledge of Azure cloud services, such as Databricks, Event Hubs, and Function Apps. Solid understanding of DevOps principles and tools like Git, Azure DevOps …

Software Developer

City of London, London, United Kingdom
Kantar Media
As people increasingly move across channels and platforms, Kantar Media’s data and audience measurement, targeting, analytics and advertising intelligence services unlock insights to inform powerful decision-making. Working with panel and first-party data in over 80 countries, we …

Data Engineer

City of London, London, United Kingdom
Hybrid / WFH Options
La Fosse
processes using AWS, Snowflake, etc. Collaborate across technical and non-technical teams Troubleshoot issues and support wider team adoption of the platform What You’ll Bring: Proficiency in Python, PySpark, Spark SQL or Java Experience with cloud tools (Lambda, S3, EKS, IAM) Knowledge of Docker, Terraform, GitHub Actions Understanding of data quality frameworks Strong communicator and team player What …

Software Development Team Lead

City of London, London, United Kingdom
Kantar Media
Leverage Azure services extensively, particularly Azure Storage, for scalable cloud solutions. Ensure seamless integration with AWS S3 and implement secure data encryption/decryption practices. Python Implementation: Utilize Python and PySpark for processing large datasets and integrating with cloud-based data solutions. Team Leadership: Manage and mentor a team of 3 engineers, fostering best practices in software development and code … and optimize workflows, ensuring efficient and reliable operations. Required 5-7 years of experience in software development with a focus on production-grade code. Proficiency in Java, Python, and PySpark; experience with C++ is a plus. Deep expertise in Azure services, including Azure Storage, and familiarity with AWS S3. Strong understanding of data security, including encryption/decryption. Proven …

Senior Software Developer

City of London, London, United Kingdom
Kantar Media
Leverage Azure services extensively, particularly Azure Storage, for scalable cloud solutions. Ensure seamless integration with AWS S3 and implement secure data encryption/decryption practices. Python Implementation: Utilize Python and PySpark for processing large datasets and integrating with cloud-based data solutions. Team Leadership: Review code and mentor a team of 3 engineers, fostering best practices in software development and … and optimize workflows, ensuring efficient and reliable operations. Required 5-7 years of experience in software development with a focus on production-grade code. Proficiency in Java, Python, and PySpark; experience with C++ is a plus. Deep expertise in Azure services, including Azure Storage, and familiarity with AWS S3. Strong understanding of data security, including encryption/decryption. Proven …
PySpark salary statistics — Central London:
10th Percentile: £77,500
25th Percentile: £82,500
Median: £85,000
75th Percentile: £93,875
90th Percentile: £111,500