Permanent PySpark Jobs in the UK

1 to 25 of 135 Permanent PySpark Jobs in the UK

Data Engineer

London, United Kingdom
Sandtech
Europe, the UK and the US. ABOUT THE ROLE Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, DBT, Docker, Python, SQL, and PySpark to name a few. We work across a variety of data architectures such as Data Mesh, lakehouse, data vault and data warehouses. Our data engineers create pipelines that support …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Senior Data Engineer

London, United Kingdom
Sandtech
Europe, the UK and the US. ABOUT THE ROLE Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, DBT, Docker, Python, SQL, and PySpark to name a few. We work across a variety of data architectures such as Data Mesh, lakehouse, data vault and data warehouses. Our data engineers create pipelines that support …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Senior Data Engineer (Remote)

South East, United Kingdom
Hybrid / WFH Options
Circana
team. In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a … make a significant impact, we encourage you to apply! Job Responsibilities ETL/ELT Pipeline Development: Design, develop, and optimize efficient and scalable ETL/ELT pipelines using Python, PySpark, and Apache Airflow. Implement batch and real-time data processing solutions using Apache Spark. Ensure data quality, governance, and security throughout the data lifecycle. Cloud Data Engineering: Manage and … and documentation. Required profile: Requirements Client facing role so strong communication and collaboration skills are vital Proven experience in data engineering, with hands-on expertise in Azure Data Services, PySpark, Apache Spark, and Apache Airflow. Strong programming skills in Python and SQL, with the ability to write efficient and maintainable code. Deep understanding of Spark internals, including RDDs, DataFrames …
Employment Type: Permanent
Posted:

Senior Data Engineer

London, United Kingdom
Hybrid / WFH Options
Scott Logic Ltd
data engineering and reporting, including storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, PowerBI etc. You're a problem-solver, pragmatically exploring options and finding effective solutions. An understanding of how to design and build well-structured, maintainable systems. Strong communication skills …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Databricks Solution Architect Champion

England, United Kingdom
TechYard
data workloads. Mentor engineering teams and support architectural decisions as a recognised Databricks expert. Essential Skills & Experience: Demonstrable expertise with Databricks and Apache Spark in production environments. Proficiency in PySpark, SQL, and working within one or more cloud platforms (Azure, AWS, or GCP). In-depth understanding of Lakehouse concepts, medallion architecture, and modern data warehousing. Experience with version …
Posted:

Senior Data Engineer

London, United Kingdom
Mars, Incorporated and its Affiliates
alignment and shared value creation. As a Data Engineer in the Commercial team, your key responsibilities are as follows: 1. Technical Proficiency: Collaborate in hands-on development using Python, PySpark, and other relevant technologies to create and maintain data assets and reports for business insights. Assist in engineering and managing data models and pipelines within a cloud environment, utilizing … technologies like Databricks, Spark, Delta Lake, and SQL. Contribute to the maintenance and enhancement of our progressive tech stack, which includes Python, PySpark, Logic Apps, Azure Functions, ADLS, Django, and ReactJs. Support the implementation of DevOps and CI/CD methodologies to foster agile collaboration and contribute to building robust data solutions. Develop code that adheres to high-quality … ideas to improve platform excellence. …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Lead Data Engineer (Remote)

South East, United Kingdom
Hybrid / WFH Options
Circana
UK. In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a … desire to make a significant impact, we encourage you to apply! Job Responsibilities Data Engineering & Data Pipeline Development Design, develop, and optimize scalable data workflows using Python, PySpark, and Airflow Implement real-time and batch data processing using Spark Enforce best practices for data quality, governance, and security throughout the data lifecycle Ensure data availability, reliability and performance through … Implement CI/CD pipelines for data workflows to ensure smooth and reliable deployments. Big Data & Analytics: Build and optimize large-scale data processing pipelines using Apache Spark and PySpark Implement data partitioning, caching, and performance tuning for Spark-based workloads. Work with diverse data formats (structured and unstructured) to support advanced analytics and machine learning initiatives. Workflow Orchestration …
Employment Type: Permanent
Posted:

Senior Data Engineer Global Trading Technology Firm

London, United Kingdom
Out in Science, Technology, Engineering, and Mathematics
data governance processes. Requirements: 5+ years of experience in data engineering, with a strong focus on building scalable data platforms. Proficiency in Python and modern data libraries (e.g. Pandas, PySpark, Dask). Strong SQL skills and experience with cloud-native data tools (AWS, GCP, or Azure). Hands-on experience with tools like Airflow, Spark, Kafka, or Snowflake. Experience …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Engineer

London, United Kingdom
Harvey Nash Group
across the team. Skills & Experience Hands-on experience with Azure Databricks, Delta Lake, Data Factory, and Synapse. Strong understanding of Lakehouse architecture and medallion design patterns. Proficient in Python, PySpark, and SQL, with advanced query optimisation skills. Proven experience building scalable ETL pipelines and managing data transformations. Familiarity with data quality frameworks and monitoring tools. Experience working with Git …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Engineer

London, United Kingdom
Hybrid / WFH Options
Veeva Systems, Inc
cooperation with our data science team Experiment in your domain to improve precision, recall, or cost savings Requirements Expert skills in Java or Python Experience with Apache Spark or PySpark Experience writing software for the cloud (AWS or GCP) Speaking and writing in English enables you to take part in day-to-day conversations in the team and contribute …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Engineer (Cyprus / United Kingdom, Remote)

London, United Kingdom
Hybrid / WFH Options
Ocean Finance
Head of Data Platform and Services, you'll not only maintain and optimize our data infrastructure but also spearhead its evolution. Built predominantly on Databricks, and utilizing technologies like PySpark and Delta Lake, our infrastructure is designed for scalability, robustness, and efficiency. You'll take charge of developing sophisticated data integrations with various advertising platforms, empowering our teams with … and informed decision-making What you'll be doing for us Leadership in Design and Development: Lead in the architecture, development, and upkeep of our Databricks-based infrastructure, harnessing PySpark and Delta Lake. CI/CD Pipeline Mastery: Create and manage CI/CD pipelines, ensuring automated deployments and system health monitoring. Advanced Data Integration: Develop sophisticated strategies for … standards. Data-Driven Culture Champion: Advocate for the strategic use of data across the organization. Skills-wise, you'll definitely: Expertise in Apache Spark Advanced proficiency in Python and PySpark Extensive experience with Databricks Advanced SQL knowledge Proven leadership abilities in data engineering Strong experience in building and managing CI/CD pipelines. Experience in implementing data integrations with …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Senior Data Engineer

Glasgow, United Kingdom
SSE plc
you'll architect and optimise our cloud data infrastructure, ensuring high-quality, accessible data for operational and strategic use. - Build and maintain production-grade data pipelines using Databricks (Python, PySpark, Delta Live Tables, Unity Catalog) to serve downstream analytics and reporting. - Optimise data architecture for performance, scalability, and reliability, proactively monitoring data health and troubleshooting pipeline issues. - Mentor junior … CI/CD, and DevSecOps alignment. You Have - Proven experience building modern data pipelines in Azure Databricks, including asset bundles, Unity Catalog, and Delta Live Tables. - Strong Python and PySpark skills, alongside advanced SQL proficiency for query performance, data modelling, and transformation logic. - Hands-on experience with version control (Git), CI/CD pipelines, and Agile development workflows in …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Senior Data Engineer

Leeds, Yorkshire, United Kingdom
SSE plc
you'll architect and optimise our cloud data infrastructure, ensuring high-quality, accessible data for operational and strategic use. - Build and maintain production-grade data pipelines using Databricks (Python, PySpark, Delta Live Tables, Unity Catalog) to serve downstream analytics and reporting. - Optimise data architecture for performance, scalability, and reliability, proactively monitoring data health and troubleshooting pipeline issues. - Mentor junior … CI/CD, and DevSecOps alignment. You Have - Proven experience building modern data pipelines in Azure Databricks, including asset bundles, Unity Catalog, and Delta Live Tables. - Strong Python and PySpark skills, alongside advanced SQL proficiency for query performance, data modelling, and transformation logic. - Hands-on experience with version control (Git), CI/CD pipelines, and Agile development workflows in …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Senior Data Engineer

Perth, Perth & Kinross, United Kingdom
SSE plc
you'll architect and optimise our cloud data infrastructure, ensuring high-quality, accessible data for operational and strategic use. - Build and maintain production-grade data pipelines using Databricks (Python, PySpark, Delta Live Tables, Unity Catalog) to serve downstream analytics and reporting. - Optimise data architecture for performance, scalability, and reliability, proactively monitoring data health and troubleshooting pipeline issues. - Mentor junior … CI/CD, and DevSecOps alignment. You Have - Proven experience building modern data pipelines in Azure Databricks, including asset bundles, Unity Catalog, and Delta Live Tables. - Strong Python and PySpark skills, alongside advanced SQL proficiency for query performance, data modelling, and transformation logic. - Hands-on experience with version control (Git), CI/CD pipelines, and Agile development workflows in …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Senior Data Engineer

Havant, Hampshire, United Kingdom
SSE plc
you'll architect and optimise our cloud data infrastructure, ensuring high-quality, accessible data for operational and strategic use. - Build and maintain production-grade data pipelines using Databricks (Python, PySpark, Delta Live Tables, Unity Catalog) to serve downstream analytics and reporting. - Optimise data architecture for performance, scalability, and reliability, proactively monitoring data health and troubleshooting pipeline issues. - Mentor junior … CI/CD, and DevSecOps alignment. You Have - Proven experience building modern data pipelines in Azure Databricks, including asset bundles, Unity Catalog, and Delta Live Tables. - Strong Python and PySpark skills, alongside advanced SQL proficiency for query performance, data modelling, and transformation logic. - Hands-on experience with version control (Git), CI/CD pipelines, and Agile development workflows in …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Senior Data Engineer

Reading, Berkshire, United Kingdom
Equal Approach Ltd
Engineer, you'll architect and optimise our cloud data infrastructure, ensuring high-quality, accessible data for operational and strategic use. - Build and maintain production-grade data pipelines using Databricks (Python, PySpark, Delta Live Tables, Unity Catalog) to serve downstream analytics and reporting. - Optimise data architecture for performance, scalability, and reliability, proactively monitoring data health and troubleshooting pipeline issues. - Mentor junior … CI/CD, and DevSecOps alignment. You Have - Proven experience building modern data pipelines in Azure Databricks, including asset bundles, Unity Catalog, and Delta Live Tables. - Strong Python and PySpark skills, alongside advanced SQL proficiency for query performance, data modelling, and transformation logic. - Hands-on experience with version control (Git), CI/CD pipelines, and Agile development workflows in …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Software Engineer Python PySpark

City of London, London, United Kingdom
Hybrid / WFH Options
Client Server
Data Software Engineer (Python PySpark) Remote UK to £95k Are you a data-savvy Software Engineer with strong Python coding skills? You could be progressing your career in a senior, hands-on Data Software Engineer role as part of a friendly and supportive international team at a growing and hugely successful European car insurance tech company as they expand … on your location/preferences. About you: You are degree educated in a relevant discipline, e.g. Computer Science, Mathematics You have a software engineering background with advanced Python and PySpark coding skills You have experience in batch, distributed data processing and near real-time streaming data pipelines with technologies such as Kafka You have experience of Big Data Analytics …
Employment Type: Permanent, Work From Home
Salary: £95,000
Posted:

Data Software Engineer Python PySpark

Birmingham, West Midlands, England, United Kingdom
Hybrid / WFH Options
Client Server Ltd
Data Software Engineer (Python PySpark) Remote UK to £95k Are you a data-savvy Software Engineer with strong Python coding skills? You could be progressing your career in a senior, hands-on Data Software Engineer role as part of a friendly and supportive international team at a growing and hugely successful European car insurance tech company as they expand … on your location/preferences. About you: You are degree educated in a relevant discipline, e.g. Computer Science, Mathematics You have a software engineering background with advanced Python and PySpark coding skills You have experience in batch, distributed data processing and near real-time streaming data pipelines with technologies such as Kafka You have experience of Big Data Analytics …
Employment Type: Full-Time
Salary: £80,000 - £95,000 per annum
Posted:

Senior Data Engineer (databricks)

Reading, Berkshire, South East, United Kingdom
IO Associates
both greenfield initiatives and enhancing high-traffic financial applications. Key Skills & Experience: Strong hands-on experience with Databricks, Delta Lake, Spark Structured Streaming, and Unity Catalog Advanced Python/PySpark and big data pipeline development Familiar with event streaming tools (Kafka, Azure Event Hubs) Solid understanding of SQL, data modelling, and lakehouse architecture Experience deploying via CI/CD …
Employment Type: Permanent
Salary: £500 - £600 per day
Posted:

Lead Data Engineer

Reading, Berkshire, South East, United Kingdom
Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential Skills: Programming Languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2). Extensive Big Data hands-on experience across coding/configuration/automation/monitoring/security is necessary. Significant AWS or Azure … Right to Work in the UK long-term as our client is NOT offering sponsorship for this role. KEYWORDS Lead Data Engineer, Senior Lead Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, On-Prem, Cloud, ETL, Azure Data Fabric, ADF, Databricks, Azure Data, Delta Lake, Data Lake. Please note that due to a high level of …
Employment Type: Permanent, Work From Home
Salary: £80,000
Posted:

Machine Learning Engineer with Data Engineering expertise (London)

London, UK
ZipRecruiter
in both data engineering and machine learning, with a strong portfolio of relevant projects. Proficiency in Python with libraries like TensorFlow, PyTorch, or Scikit-learn for ML, and Pandas, PySpark, or similar for data processing. Experience designing and orchestrating data pipelines with tools like Apache Airflow, Spark, or Kafka. Strong understanding of SQL, NoSQL, and data modeling. Familiarity with …
Employment Type: Full-time
Posted:

Senior Data Engineer / Lead Data Engineer

Manchester, North West, United Kingdom
Woods & Wood Recruitment Ltd
warehouse and data infrastructure to support advanced analytics and reporting needs for a fast-growing organisation. Key Responsibilities: Design, develop, and maintain scalable data pipelines using SQL and Python (PySpark). Ingest, transform, and curate data from multiple sources into Azure Data Lake and Delta Lake formats. Build and optimize datasets for performance and reliability in Azure Databricks. … to governance policies. Monitor and troubleshoot production jobs and processes. Preferred Skills & Experience: Strong proficiency in SQL for data transformation and performance tuning. Solid experience with Python, ideally using PySpark in Azure Databricks. Hands-on experience with Azure Data Lake Storage Gen2. Understanding of data warehouse concepts, dimensional modelling, and data architecture. Experience working with Delta …
Employment Type: Permanent
Salary: £65,000
Posted:

Software Engineer

United Kingdom
Altrata Group
The Software Engineer will build, run, and work on enterprise-grade software systems using a modern tech stack including PySpark with Databricks for data engineering tasks, infrastructure as code with AWS CDK and GraphQL. As a Software Engineer, you are expected to work with architects to design clean decoupled solutions; create automated tests in support of continuous delivery; adopt … scientific degree or equivalent professional experience. Some level of professional working experience. More if no relevant degree. OO and functional programming experience, design patterns, SOLID principles. Experience in Python, PySpark and/or SQL is preferred. Experience with scrum, TDD, BDD, Pairing, Pull Requests, Continuous Integration & Delivery. Continuous Integration tools - Github, Azure DevOps, Jenkins or similar. Infrastructure as code …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Engineering Associate

London, United Kingdom
Metyis AG
experience with Azure services such as Data Factory, Databricks, Synapse (DWH), Azure Functions, and other data analytics tools, including streaming. Experience with Airflow and Kubernetes. Programming skills in Python (PySpark) and scripting languages like Bash. Knowledge of Git, CI/CD operations, and Docker. Basic PowerBI knowledge is a plus. Experience deploying cloud infrastructure is desirable. Understanding of Infrastructure …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Graduate Data Engineer Python Spark SQL

Newcastle Upon Tyne, Tyne and Wear, England, United Kingdom
Hybrid / WFH Options
Client Server Ltd
scientific discipline, backed by minimum AAA grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have experience with Azure and Databricks. You're collaborative with excellent …
Employment Type: Full-Time
Salary: £33,000 per annum
Posted:
PySpark Salary Percentiles
10th Percentile: £49,250
25th Percentile: £62,250
Median: £92,500
75th Percentile: £122,500
90th Percentile: £143,750