3 of 3 Remote Spark SQL Jobs in Tyne and Wear

Data Engineer Python Spark SQL - Fintech

Hiring Organisation
Client Server
Location
Tyne and Wear, UK
Data Engineer (Python Spark SQL) *Newcastle Onsite* to £120k
Do you have a first-class education combined with Data Engineering skills? You could be progressing your career at a start-up Investment Management firm that has secure backing and an established Hedge Fund client as a partner …
* Minimum A A B grades at A-level
* Commercial Data Engineering experience with technologies such as SQL, Apache Spark and Python, including PySpark and Pandas
* A good understanding of modern data engineering best practices
Ideally you will also have experience with ...

Azure Data Engineer - £500 - Hybrid

Hiring Organisation
Tenth Revolution Group
Location
Newcastle upon Tyne, Tyne and Wear, United Kingdom
Employment Type
Contract
Contract Rate
£450 - £550/day
modeling, ETL/ELT development, and collaborative engineering practices.
Key Responsibilities
* Design, develop, and maintain scalable data pipelines using Azure Databricks (Python, PySpark, SQL).
* Build and optimize ETL/ELT workflows that ingest data from various on-prem and cloud-based sources.
* Work with Azure services including … Azure Data Lake Storage, Azure Data Factory, Azure Synapse Analytics, Azure SQL, and Event Hub.
* Implement data quality validation, monitoring, metadata management, and governance processes.
* Collaborate closely with data architects, analysts, and business stakeholders to understand data requirements.
* Optimize Databricks clusters, jobs, and runtimes for performance and cost ...
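The responsibilities above describe a standard extract-transform-load pattern with a data quality gate. As a minimal, self-contained sketch (plain-Python stand-ins only: an in-memory CSV string in place of a data-lake source, sqlite3 in place of Azure SQL; every table and column name here is hypothetical, not taken from the listing):

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; a real pipeline would ingest from cloud storage
# (e.g. Azure Data Lake Storage) rather than an in-memory string.
RAW_CSV = """trade_id,symbol,price
1,ABC,10.50
2,XYZ,
3,ABC,7.25
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse the raw CSV into row dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: keep only rows that pass a simple quality check."""
    return [
        (int(r["trade_id"]), r["symbol"], float(r["price"]))
        for r in rows
        if r["price"]  # data quality validation: reject missing prices
    ]

def load(rows: list[tuple], conn: sqlite3.Connection) -> int:
    """Load: write validated rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS trades "
        "(trade_id INTEGER, symbol TEXT, price REAL)"
    )
    conn.executemany("INSERT INTO trades VALUES (?, ?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM trades").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW_CSV)), conn)
print(loaded)  # 2 — the row with a missing price is dropped by validation
```

In a Databricks setting the same three stages would typically be Spark DataFrame reads, PySpark/SQL transformations with quality checks, and writes to a managed table, but the extract/validate/load shape is the same.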