Remote Permanent PySpark Job Vacancies

1 to 25 of 261 Remote Permanent PySpark Jobs

Data Engineer (f/m/x) (EN) - Hybrid

Dortmund, Nordrhein-Westfalen, Germany
Hybrid / WFH Options
NETCONOMY
Requirements: • 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark • Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL) • Experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, and Delta Live Tables • Solid understanding of data warehousing principles, ETL/ELT processes, data … data engineering, and cloud technologies to continuously improve tools and approaches (a short PySpark sketch follows this listing) Technologies: AI, AWS, Azure, CI/CD, Cloud, Databricks, DevOps, ETL, GCP, Machine Learning, Power BI, Python, PySpark, SQL, Spark, Terraform, Unity Catalog, Looker, SAP More: NETCONOMY has grown over the past 20 years from a startup to a 500-person team working across 10 European locations …
Employment Type: Permanent
Salary: EUR 50,000 - 60,000 Annual
Posted:
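For context on the data-manipulation skills this listing names, here is a minimal, illustrative PySpark sketch showing the same aggregation in both the DataFrame API and Spark SQL. The table, rows, and column names are invented for the example; on Databricks a `spark` session is provided, so the builder line is only needed for local runs.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a `spark` session already exists; this builder is for local runs.
spark = SparkSession.builder.appName("orders-demo").getOrCreate()

# Invented in-memory rows standing in for a real source table.
orders = spark.createDataFrame(
    [("o1", "DE", 120.0), ("o2", "AT", 80.0), ("o3", "DE", 45.5)],
    ["order_id", "country", "amount"],
)

# DataFrame API: total revenue per country.
orders.groupBy("country").agg(F.sum("amount").alias("revenue")).show()

# The same aggregation expressed in Spark SQL over a temporary view.
orders.createOrReplaceTempView("orders")
spark.sql("SELECT country, SUM(amount) AS revenue FROM orders GROUP BY country").show()
```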

Data Engineer (f/m/x)

Austria
Hybrid / WFH Options
NETCONOMY GmbH
Skills: 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark. Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL). Experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, and Delta Live Tables. Solid understanding of data warehousing principles, ETL/ELT processes, data …
Employment Type: Permanent
Salary: EUR Annual
Posted:

Data Engineer (f/m/x)

Wien, Austria
Hybrid / WFH Options
NETCONOMY GmbH
Skills: 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark. Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL). Experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, and Delta Live Tables. Solid understanding of data warehousing principles, ETL/ELT processes, data …
Employment Type: Permanent
Salary: EUR Annual
Posted:

Data Engineer (f/m/x)

Graz, Steiermark, Austria
Hybrid / WFH Options
NETCONOMY GmbH
Skills: 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark. Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL). Experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, and Delta Live Tables. Solid understanding of data warehousing principles, ETL/ELT processes, data …
Employment Type: Permanent
Salary: EUR Annual
Posted:

Senior Data Engineer (Databricks)

London, England, United Kingdom
Hybrid / WFH Options
DATAPAO
… and hiring efforts. What does it take to fit the bill? Technical Expertise: 5+ years in Data Engineering, focusing on cloud platforms (AWS, Azure, GCP); proven experience with Databricks (PySpark, SQL, Delta Lake, Unity Catalog); extensive ETL/ELT and data pipeline orchestration experience (e.g., Databricks Workflows, DLT, Airflow, ADF, Glue, Step Functions); proficiency in SQL and Python for … (a minimal orchestration sketch follows this listing)
Posted:
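The orchestration tools this listing names (Databricks Workflows, DLT, Airflow, ADF, Glue, Step Functions) all express a pipeline as dependent tasks. As a hedged illustration of that idea, here is a minimal Apache Airflow DAG with two dependent Python tasks; the DAG id, schedule, and task bodies are placeholders, and the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: a real task might pull files from an API or object storage.
    print("extracting")

def transform():
    # Placeholder: a real task might trigger a Databricks or Spark job.
    print("transforming")

with DAG(
    dag_id="elt_pipeline_sketch",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",              # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # extract must finish before transform starts
```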

Senior Data Engineer (Databricks) - UK

London, England, United Kingdom
Hybrid / WFH Options
Datapao
… Expertise: You (ideally) have 5+ years of experience in Data Engineering, with a focus on cloud platforms (AWS, Azure, GCP); you have a proven track record working with Databricks (PySpark, SQL, Delta Lake, Unity Catalog); you have extensive experience in ETL/ELT development and data pipeline orchestration (e.g., Databricks Workflows, DLT, Airflow, ADF, Glue, and Step Functions); you …
Posted:

Senior Data Engineer (Remote)

South East, United Kingdom
Hybrid / WFH Options
Circana
… team. In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a desire to make a significant impact, we encourage you to apply! Job Responsibilities: ETL/ELT Pipeline Development: Design, develop, and optimize efficient and scalable ETL/ELT pipelines using Python, PySpark, and Apache Airflow. Implement batch and real-time data processing solutions using Apache Spark. Ensure data quality, governance, and security throughout the data lifecycle. Cloud Data Engineering: Manage and … and documentation. Required profile: Client-facing role, so strong communication and collaboration skills are vital. Proven experience in data engineering, with hands-on expertise in Azure Data Services, PySpark, Apache Spark, and Apache Airflow. Strong programming skills in Python and SQL, with the ability to write efficient and maintainable code. Deep understanding of Spark internals, including RDDs, DataFrames … (a brief RDD-versus-DataFrame sketch follows this listing)
Employment Type: Permanent
Posted:
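Since this role asks for a deep understanding of Spark internals, including RDDs and DataFrames, a small sketch may help fix the distinction: the same aggregation written against the Catalyst-optimised DataFrame API and against the lower-level RDD API. All data and names are invented.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("internals-demo").getOrCreate()

# DataFrame API: declarative, planned and optimised by Catalyst.
df = spark.createDataFrame([("a", 1), ("b", 2), ("a", 3)], ["key", "value"])
df.groupBy("key").sum("value").show()

# The same aggregation one level down, on an RDD: no optimiser,
# explicit key-value handling and a manual combine function.
rdd = spark.sparkContext.parallelize([("a", 1), ("b", 2), ("a", 3)])
print(rdd.reduceByKey(lambda x, y: x + y).collect())
```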

Senior Data Engineer (Remote)

London, England, United Kingdom
Hybrid / WFH Options
Circana
… team. In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a desire to make a significant impact, we encourage you to apply! Job Responsibilities: ETL/ELT Pipeline Development: Design, develop, and optimize efficient and scalable ETL/ELT pipelines using Python, PySpark, and Apache Airflow. Implement batch and real-time data processing solutions using Apache Spark. Ensure data quality, governance, and security throughout the data lifecycle. Cloud Data Engineering: Manage and … version control, and documentation. Requirements: Client-facing role, so strong communication and collaboration skills are vital. Proven experience in data engineering, with hands-on expertise in Azure Data Services, PySpark, Apache Spark, and Apache Airflow. Strong programming skills in Python and SQL, with the ability to write efficient and maintainable code. Deep understanding of Spark internals, including RDDs, DataFrames …
Posted:

Junior Data Engineer

Wilmslow, England, United Kingdom
Hybrid / WFH Options
JR United Kingdom
… building scalable, reliable data pipelines, managing data infrastructure, and supporting data products across various cloud environments, primarily Azure. Key Responsibilities: Develop end-to-end data pipelines using Python, Databricks, PySpark, and SQL. Integrate data from various sources including APIs, Excel, CSV, JSON, and databases (a short ingestion sketch follows this listing). Manage data lakes, warehouses, and lakehouses within Azure cloud environments. Apply data modelling techniques such …
Posted:
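As a rough illustration of the multi-source ingestion this junior role describes, the sketch below reads a CSV and a JSON source into PySpark DataFrames, joins them, and writes a curated Parquet output. All paths and the `customer_id` join key are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingest-demo").getOrCreate()

# Hypothetical landing-zone paths; a real pipeline would point at lake storage.
customers = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/data/landing/customers.csv")
)
events = spark.read.json("/data/landing/events.json")

# Hypothetical join key; combine the sources and land a curated output.
curated = customers.join(events, on="customer_id", how="left")
curated.write.mode("overwrite").parquet("/data/curated/customer_events")
```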

Lead Data Engineer

London, United Kingdom
Hybrid / WFH Options
Scott Logic Ltd
… data engineering and reporting, including storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You're a problem-solver, pragmatically exploring options and finding effective solutions. An understanding of how to design and build well-structured, maintainable systems. Strong communication skills …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Senior Data Engineer

Edinburgh, Scotland, United Kingdom
Hybrid / WFH Options
Scott Logic
… data engineering and reporting, including storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You've got a background in software engineering, including Front End technologies like JavaScript. You're a problem-solver, pragmatically exploring options and finding effective solutions. An …
Posted:

Lead Data Engineer

Leeds, England, United Kingdom
Hybrid / WFH Options
Scott Logic
… data engineering and reporting, including storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You've got a background in software engineering, including Front End technologies like JavaScript. You're a problem-solver, pragmatically exploring options and finding effective solutions. An …
Posted:

Lead Data Engineer

Bristol, England, United Kingdom
Hybrid / WFH Options
Scott Logic
… data engineering and reporting, including storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You've got a background in software engineering, including Front End technologies like JavaScript. You're a problem-solver, pragmatically exploring options and finding effective solutions. An …
Posted:

Lead Data Engineer

London, England, United Kingdom
Hybrid / WFH Options
Scott Logic Ltd
… data engineering and reporting, including storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You've got a background in software engineering. You're a problem-solver, pragmatically exploring options and finding effective solutions. An understanding of how to design and …
Posted:

Senior Data Engineer

London, United Kingdom
Hybrid / WFH Options
Scott Logic Ltd
… data engineering and reporting, including storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You're a problem-solver, pragmatically exploring options and finding effective solutions. An understanding of how to design and build well-structured, maintainable systems. Strong communication skills …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Engineer

London, England, United Kingdom
Hybrid / WFH Options
Locus Robotics
Locus Robotics is a global leader in warehouse automation, delivering unmatched flexibility, unlimited throughput, and actionable intelligence to optimize operations. Powered by LocusONE, an AI-driven platform, our advanced autonomous mobile robots seamlessly integrate into existing warehouse environments to …
Posted:

Senior AWS Data Engineer

London, England, United Kingdom
Hybrid / WFH Options
ZipRecruiter
… Experience: Proven experience as a Data Engineer working in cloud environments (AWS). Strong proficiency with Python and SQL. Extensive hands-on experience in AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Familiarity with DevOps practices and infrastructure-as-code (e.g., Terraform, CloudFormation). Solid understanding of data modeling, ETL frameworks … (a short PySpark sketch of partitioned S3 output follows this listing)
Posted:
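One common pattern behind the Glue/Athena stack this listing mentions is landing curated data on S3 as partitioned columnar files so query engines can prune partitions. A minimal, assumption-laden PySpark sketch follows; the bucket name, schema, and `s3a://` filesystem configuration are all placeholders, not the listing's actual setup.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-layout-demo").getOrCreate()

# Hypothetical bucket and schema; assumes the cluster is configured
# with S3 credentials and the s3a filesystem connector.
events = spark.read.json("s3a://example-bucket/raw/events/")

(
    events.write
    .mode("overwrite")
    .partitionBy("event_date")  # lets Athena-style engines prune partitions
    .parquet("s3a://example-bucket/curated/events/")
)
```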

Senior Data Engineer

London, England, United Kingdom
Hybrid / WFH Options
Artefact
Artefact is a new-generation data service provider, specialising in data consulting and data-driven digital marketing, dedicated to transforming data into business impact across the entire value chain of …
Posted:

Data Engineer

Newcastle upon Tyne, England, United Kingdom
Hybrid / WFH Options
Somerset Bridge Group
… with large-scale datasets using Azure Data Factory (ADF) and Databricks. Strong proficiency in SQL (T-SQL, Spark SQL) for data extraction, transformation, and optimisation. Proficiency in Azure Databricks (PySpark, Delta Lake, Spark SQL) for big data processing. Knowledge of data warehousing concepts and relational database design, particularly with Azure Synapse Analytics. Experience working with Delta Lake for schema evolution, ACID transactions, and time travel in Databricks (a brief Delta Lake sketch follows this listing). Strong Python (PySpark) skills for big data processing and automation. Experience with Scala (optional, but preferred for advanced Spark applications). Experience working with Databricks Workflows & Jobs for data orchestration. Strong knowledge of feature engineering and feature stores, particularly the Databricks Feature Store for ML training and inference. Experience with data …
Posted:
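For the Delta Lake skills this listing highlights, here is a brief sketch of schema evolution and time travel, assuming a Delta-enabled Spark session (as on Databricks, or with the `delta-spark` package configured locally); the table path, rows, and columns are invented.

```python
from pyspark.sql import SparkSession

# Assumes a Delta-enabled session (Databricks, or delta-spark configured locally).
spark = SparkSession.builder.appName("delta-demo").getOrCreate()

path = "/mnt/lake/policies"  # hypothetical Delta table location

# Invented rows; mergeSchema lets a new column join the existing table schema.
new_rows = spark.createDataFrame(
    [(1, "motor", 350.0)], ["policy_id", "product", "premium"]
)
new_rows.write.format("delta").mode("append") \
    .option("mergeSchema", "true").save(path)

# Time travel: read the table as it existed at an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
v0.show()
```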

Lead Data Engineer (Remote)

South East, United Kingdom
Hybrid / WFH Options
Circana
… UK. In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a desire to make a significant impact, we encourage you to apply! Job Responsibilities: Data Engineering & Data Pipeline Development: Design, develop, and optimize scalable data workflows using Python, PySpark, and Airflow. Implement real-time and batch data processing using Spark. Enforce best practices for data quality, governance, and security throughout the data lifecycle. Ensure data availability, reliability, and performance through … Implement CI/CD pipelines for data workflows to ensure smooth and reliable deployments. Big Data & Analytics: Build and optimize large-scale data processing pipelines using Apache Spark and PySpark. Implement data partitioning, caching, and performance tuning for Spark-based workloads (a brief tuning sketch follows this listing). Work with diverse data formats (structured and unstructured) to support advanced analytics and machine learning initiatives. Workflow Orchestration …
Employment Type: Permanent
Posted:
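The partitioning, caching, and tuning duties above can be illustrated with a short, hypothetical PySpark sketch: repartition on a join key, cache a reused DataFrame, and broadcast a small dimension table to avoid a shuffle join. Paths and column names are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

facts = spark.read.parquet("/data/facts")  # hypothetical large fact table
dims = spark.read.parquet("/data/dims")    # hypothetical small dimension table

# Repartition on the join key to spread skewed data, then cache the
# result because several downstream aggregations reuse it.
facts = facts.repartition("customer_id").cache()

# Broadcasting the small table avoids shuffling the large one.
enriched = facts.join(broadcast(dims), "customer_id")
enriched.count()  # an action, which also materialises the cache
```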

Data Architect

London Area, United Kingdom
Hybrid / WFH Options
Osmii
Core Platform Build & Development. Hands-on Implementation: Act as a lead engineer in the initial build-out of core data pipelines, ETL/ELT processes, and data models using PySpark, SQL, and Databricks notebooks. Data Ingestion & Integration: Establish scalable data ingestion frameworks from diverse sources (batch and streaming) into the Lakehouse. Performance Optimization: Design and implement solutions for optimal … Extensive experience with Azure data services (e.g., Azure Data Factory, Azure Data Lake Storage, Azure Synapse) and architecting cloud-native data platforms. Programming Proficiency: Expert-level skills in Python (PySpark) and SQL for data engineering and transformation; Scala is a strong plus. Data Modelling: Strong understanding and practical experience with data warehousing, data lake, and dimensional modelling concepts (a short dimensional-modelling sketch follows this listing). ETL …
Posted:
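As a loose illustration of the dimensional-modelling work this listing describes, the sketch below derives a simple customer dimension with a generated surrogate key from an invented raw feed, in a Databricks-notebook style; every name in it is hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dim-demo").getOrCreate()

# Invented raw feed standing in for an ingested source table.
raw = spark.createDataFrame(
    [("c1", "ACME Ltd"), ("c2", "Globex")],
    ["customer_id", "customer_name"],
)

# A minimal customer dimension with a generated surrogate key.
dim_customer = raw.select(
    F.monotonically_increasing_id().alias("customer_sk"),
    "customer_id",
    "customer_name",
)

# Registered as a view so a fact-building step could reference it in SQL.
dim_customer.createOrReplaceTempView("dim_customer")
spark.sql("SELECT * FROM dim_customer").show()
```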

Data Architect

City of London, London, United Kingdom
Hybrid / WFH Options
Osmii
Core Platform Build & Development. Hands-on Implementation: Act as a lead engineer in the initial build-out of core data pipelines, ETL/ELT processes, and data models using PySpark, SQL, and Databricks notebooks. Data Ingestion & Integration: Establish scalable data ingestion frameworks from diverse sources (batch and streaming) into the Lakehouse. Performance Optimization: Design and implement solutions for optimal … Extensive experience with Azure data services (e.g., Azure Data Factory, Azure Data Lake Storage, Azure Synapse) and architecting cloud-native data platforms. Programming Proficiency: Expert-level skills in Python (PySpark) and SQL for data engineering and transformation; Scala is a strong plus. Data Modelling: Strong understanding and practical experience with data warehousing, data lake, and dimensional modelling concepts. ETL …
Posted:

Data Architect

South East London, England, United Kingdom
Hybrid / WFH Options
Osmii
Core Platform Build & Development. Hands-on Implementation: Act as a lead engineer in the initial build-out of core data pipelines, ETL/ELT processes, and data models using PySpark, SQL, and Databricks notebooks. Data Ingestion & Integration: Establish scalable data ingestion frameworks from diverse sources (batch and streaming) into the Lakehouse. Performance Optimization: Design and implement solutions for optimal … Extensive experience with Azure data services (e.g., Azure Data Factory, Azure Data Lake Storage, Azure Synapse) and architecting cloud-native data platforms. Programming Proficiency: Expert-level skills in Python (PySpark) and SQL for data engineering and transformation; Scala is a strong plus. Data Modelling: Strong understanding and practical experience with data warehousing, data lake, and dimensional modelling concepts. ETL …
Posted:

Data Architect

London, South East England, United Kingdom
Hybrid / WFH Options
Osmii
Core Platform Build & Development. Hands-on Implementation: Act as a lead engineer in the initial build-out of core data pipelines, ETL/ELT processes, and data models using PySpark, SQL, and Databricks notebooks. Data Ingestion & Integration: Establish scalable data ingestion frameworks from diverse sources (batch and streaming) into the Lakehouse. Performance Optimization: Design and implement solutions for optimal … Extensive experience with Azure data services (e.g., Azure Data Factory, Azure Data Lake Storage, Azure Synapse) and architecting cloud-native data platforms. Programming Proficiency: Expert-level skills in Python (PySpark) and SQL for data engineering and transformation; Scala is a strong plus. Data Modelling: Strong understanding and practical experience with data warehousing, data lake, and dimensional modelling concepts. ETL …
Posted:

Data Architect

Slough, South East England, United Kingdom
Hybrid / WFH Options
Osmii
Core Platform Build & Development. Hands-on Implementation: Act as a lead engineer in the initial build-out of core data pipelines, ETL/ELT processes, and data models using PySpark, SQL, and Databricks notebooks. Data Ingestion & Integration: Establish scalable data ingestion frameworks from diverse sources (batch and streaming) into the Lakehouse. Performance Optimization: Design and implement solutions for optimal … Extensive experience with Azure data services (e.g., Azure Data Factory, Azure Data Lake Storage, Azure Synapse) and architecting cloud-native data platforms. Programming Proficiency: Expert-level skills in Python (PySpark) and SQL for data engineering and transformation; Scala is a strong plus. Data Modelling: Strong understanding and practical experience with data warehousing, data lake, and dimensional modelling concepts. ETL …
Posted:
PySpark Salary Percentiles
10th Percentile: £50,000
25th Percentile: £63,438
Median: £105,000
75th Percentile: £122,500
90th Percentile: £143,750