3 of 3 Spark SQL Jobs in London

PySpark Developer

Hiring Organisation: DCV Technologies
Location: London, United Kingdom
Employment Type: Contract
Contract Rate: £35 - £55/hour
hands-on, focused on building and optimising large-scale data pipelines, dataflows, semantic models, and lakehouse components.

Key Responsibilities
- Design, build and optimise Spark-based data pipelines for batch and streaming workloads
- Develop Fabric dataflows, pipelines, and semantic models
- Implement complex transformations, joins, aggregations and performance tuning
- Build … translate requirements into scalable technical solutions
- Troubleshoot and improve reliability, latency and workload performance

Essential Skills
- Strong hands-on experience with PySpark, Spark SQL, Spark Streaming, DataFrames
- Microsoft Fabric (Fabric Spark jobs, dataflows, pipelines, semantic models)
- Azure: ADLS, cloud data engineering, notebooks ...
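
As an illustration of the transformation, join, aggregation and tuning work this listing describes, here is a minimal PySpark batch sketch. The paths, table names and columns are hypothetical, not taken from the listing.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales-pipeline").getOrCreate()

# Hypothetical inputs: paths and schemas are illustrative only.
orders = spark.read.parquet("/data/raw/orders")
customers = spark.read.parquet("/data/raw/customers")

# Broadcasting the smaller dimension table avoids a shuffle on the join,
# a common Spark performance-tuning technique.
daily_revenue = (
    orders
    .join(F.broadcast(customers), "customer_id")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)

# Partitioning the output by date keeps downstream reads selective.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "/data/curated/daily_revenue"
)
```

The same DataFrame logic carries over to the streaming side of the role: `spark.readStream` produces a streaming DataFrame that accepts largely the same transformations.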

Senior Data Engineer - Azure Databricks/ SC Clearance

Hiring Organisation: Crimson
Location: London, South East, England, United Kingdom
Employment Type: Contractor
Contract Rate: £600 - £620 per day
data sources into Azure Databricks.
- Optimize pipelines for performance, reliability, and cost, incorporating data quality checks.
- Develop complex transformations and processing logic using Spark (PySpark/Scala) for cleaning, enrichment, and aggregation, ensuring accuracy and consistency across the data lifecycle.
- Work extensively with Unity Catalog, Delta Lake, Spark SQL, and related services.
- Apply best practices for development, deployment, and workload optimization.
- Program in SQL, Python, R, YAML, and JavaScript.
- Integrate data from relational databases, APIs, and streaming sources using best-practice patterns.
- Collaborate with API developers for seamless data exchange.
- Utilize Azure ...
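
A minimal sketch of the cleaning-plus-quality-check pattern this listing describes, writing to a Delta table under Unity Catalog. The three-level `catalog.schema.table` names and the column names are assumptions for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical table names; Unity Catalog addresses tables as
# catalog.schema.table.
raw = spark.read.table("main.raw.transactions")

# Cleaning: deduplicate, drop non-positive amounts, standardise currency codes.
cleaned = (
    raw.dropDuplicates(["transaction_id"])
       .filter(F.col("amount") > 0)
       .withColumn("currency", F.upper(F.trim("currency")))
)

# Simple data-quality gate: fail fast before writing if key fields are missing.
null_keys = cleaned.filter(F.col("transaction_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"{null_keys} rows have a null transaction_id")

# Persist as a managed Delta table for downstream Spark SQL access.
cleaned.write.format("delta").mode("overwrite").saveAsTable(
    "main.curated.transactions"
)
```

Running the check before the write keeps bad batches out of the curated layer rather than repairing them after the fact, which is the usual reason listings pair "data quality checks" with pipeline optimisation.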

Data Engineer - Azure Databricks/ SC Clearance - Contract

Hiring Organisation: Crimson
Location: London, South East, England, United Kingdom
Employment Type: Contractor
Contract Rate: £500 - £520 per day
pipelines to ingest, transform, and load data from diverse sources (APIs, databases, files) into Azure Databricks.
- Implement data cleaning, validation, and enrichment using Spark (PySpark/Scala) and related tools to ensure quality and consistency.
- Utilize Unity Catalog, Delta Lake, Spark SQL, and best … practices for Databricks development, optimization, and deployment.
- Program in SQL, Python, R, YAML, and JavaScript.
- Integrate data from multiple sources and formats (CSV, JSON, Parquet, Delta) for downstream analytics, dashboards, and reporting.
- Apply Azure Purview for governance and quality checks.
- Monitor pipelines, resolve issues, and enhance data quality ...
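
For the multi-format ingestion this listing mentions (CSV, JSON, Parquet, Delta), a PySpark sketch might look like the following. The landing paths and the shared three-column schema are assumptions; the Delta write assumes the Delta Lake package is available, as it is on Databricks.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical source paths, one per input format named in the listing.
csv_df = spark.read.option("header", True).csv("/landing/events_csv/")
json_df = spark.read.json("/landing/events_json/")
parquet_df = spark.read.parquet("/landing/events_parquet/")

# Assume a common (event_id, event_ts, payload) schema across sources;
# unionByName matches columns by name rather than position.
cols = ["event_id", "event_ts", "payload"]
combined = (
    csv_df.select(*cols)
    .unionByName(json_df.select(*cols))
    .unionByName(parquet_df.select(*cols))
    .withColumn("ingested_at", F.current_timestamp())
)

# Append to a Delta path that downstream dashboards and reports can query.
combined.write.format("delta").mode("append").save("/lake/bronze/events")
```

Stamping each row with `ingested_at` is a common convention for the monitoring duties the listing names, since it lets a pipeline check detect stale or missing loads with a simple max-timestamp query.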