Spark SQL Jobs in London

7 of 7 Spark SQL Jobs in London

Senior Data Engineer

London Area, United Kingdom
Mastek
… platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines.
Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle.
Azure … Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Need to be able to program in languages such as SQL, Python, R, YAML, and JavaScript.
Data Integration: Integrate data from various sources … practices.
Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage …
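As an editor's illustration of the Spark work this posting describes, here is a minimal PySpark sketch of a data quality check plus a cleaning-and-aggregating transformation. The table names, columns, and validation rules are hypothetical, not taken from the posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("pipeline-quality-checks").getOrCreate()

    # Hypothetical source table; in practice this would come from the platform's catalog.
    orders = spark.read.table("raw.orders")

    # Validation rules: keep rows with a key and a positive amount.
    # coalesce makes the predicate never null, so ~rules is the exact complement.
    rules = F.col("order_id").isNotNull() & (F.coalesce(F.col("amount"), F.lit(0)) > 0)
    valid = orders.filter(rules)
    rejected = orders.filter(~rules)

    # Clean, enrich, and aggregate the valid rows.
    daily = (
        valid
        .withColumn("order_date", F.to_date("created_at"))
        .groupBy("order_date")
        .agg(
            F.sum("amount").alias("total_amount"),
            F.countDistinct("order_id").alias("order_count"),
        )
    )

    daily.write.mode("overwrite").saveAsTable("curated.daily_orders")
    rejected.write.mode("append").saveAsTable("quarantine.orders_rejected")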

Senior Data Engineer

City of London, London, United Kingdom
Mastek
… platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines.
Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle.
Azure … Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Need to be able to program in languages such as SQL, Python, R, YAML, and JavaScript.
Data Integration: Integrate data from various sources … practices.
Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage …
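Since this posting names the same Databricks stack, a complementary sketch: Delta Lake maintenance through Spark SQL using Unity Catalog's three-level names. The catalog, schema, and table names are hypothetical, and the staging table is assumed to match the target schema.

    from pyspark.sql import SparkSession

    # On Databricks, `spark` is pre-defined; the builder is only here for self-containment.
    spark = SparkSession.builder.getOrCreate()

    # Unity Catalog uses three-level names: catalog.schema.table (all hypothetical here).
    spark.sql("""
        CREATE TABLE IF NOT EXISTS main.sales.orders_delta (
            order_id STRING,
            amount DOUBLE,
            order_date DATE
        ) USING DELTA
    """)

    # Upsert staged changes into the Delta table.
    spark.sql("""
        MERGE INTO main.sales.orders_delta AS t
        USING main.staging.orders_updates AS s
        ON t.order_id = s.order_id
        WHEN MATCHED THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)

    # Compact small files to keep reads fast and costs down.
    spark.sql("OPTIMIZE main.sales.orders_delta ZORDER BY (order_date)")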

Senior Data Engineer

City of London, London, United Kingdom
Hybrid/Remote Options
Tenth Revolution Group
… days annual leave plus bank holidays, performance-related bonus, private medical care, and many more.
Role and Responsibilities: Develop and maintain AWS-based data pipelines using Python, PySpark, Spark SQL, AWS Glue, Step Functions, Lambda, EMR, and Redshift. Design, implement, and optimise data architecture for scalability, performance, and security. Work closely with business and technical stakeholders … progress reporting, and delivery of project milestones. Engage in client workshops, gather feedback, and provide technical guidance.
Required Skills & Experience: Strong hands-on experience in Python, PySpark, and Spark SQL. Proven expertise in AWS Glue, Step Functions, Lambda, EMR, and Redshift. Solid understanding of cloud architecture, security, and scalability best practices. Experience designing and implementing CI/CD …
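To make the stack concrete, a minimal AWS Glue job skeleton in Python/PySpark with a Spark SQL step. The database, table, columns, and S3 path are hypothetical placeholders, not details from the posting.

    import sys

    from awsglue.context import GlueContext
    from awsglue.dynamicframe import DynamicFrame
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    spark = glue_context.spark_session
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read a source table registered in the Glue Data Catalog (hypothetical names).
    events = glue_context.create_dynamic_frame.from_catalog(
        database="raw", table_name="events"
    ).toDF()

    # Spark SQL step: drop incomplete rows and normalise the timestamp.
    events.createOrReplaceTempView("events")
    cleaned = spark.sql("""
        SELECT user_id, event_type, CAST(ts AS TIMESTAMP) AS ts
        FROM events
        WHERE user_id IS NOT NULL
    """)

    # Write the curated output to S3 as Parquet.
    glue_context.write_dynamic_frame.from_options(
        frame=DynamicFrame.fromDF(cleaned, glue_context, "cleaned"),
        connection_type="s3",
        connection_options={"path": "s3://example-bucket/curated/events/"},
        format="parquet",
    )
    job.commit()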

Senior Data Engineer

London Area, United Kingdom
Hybrid/Remote Options
Tenth Revolution Group
… days annual leave plus bank holidays, performance-related bonus, private medical care, and many more.
Role and Responsibilities: Develop and maintain AWS-based data pipelines using Python, PySpark, Spark SQL, AWS Glue, Step Functions, Lambda, EMR, and Redshift. Design, implement, and optimise data architecture for scalability, performance, and security. Work closely with business and technical stakeholders … progress reporting, and delivery of project milestones. Engage in client workshops, gather feedback, and provide technical guidance.
Required Skills & Experience: Strong hands-on experience in Python, PySpark, and Spark SQL. Proven expertise in AWS Glue, Step Functions, Lambda, EMR, and Redshift. Solid understanding of cloud architecture, security, and scalability best practices. Experience designing and implementing CI/CD …
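For the orchestration side of the same stack, a hedged sketch of a Lambda handler that a Step Functions task could invoke to start a Glue job. The job name and argument key are hypothetical.

    import boto3

    glue = boto3.client("glue")

    def handler(event, context):
        # Invoked by a Step Functions task (or an S3 event) to kick off the ETL run.
        run = glue.start_job_run(
            JobName="nightly-etl",  # hypothetical Glue job name
            Arguments={"--target_date": event.get("target_date", "")},
        )
        # Step Functions can poll this run id to decide when to move to the next state.
        return {"JobRunId": run["JobRunId"]}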

PySpark Developer

London, United Kingdom
Queen Square Recruitment Ltd
… analysts and stakeholders to translate business needs into technical solutions. Maintain clear documentation and contribute to internal best practices.
Requirements: Strong hands-on experience with PySpark (RDDs, DataFrames, Spark SQL). Proven ability to build and optimise ETL pipelines and dataflows. Familiarity with Microsoft Fabric or similar lakehouse/data platform environments. Experience with Git, CI …
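A short, self-contained sketch of the three PySpark layers the requirements name, using made-up data:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("pyspark-layers").getOrCreate()
    sc = spark.sparkContext

    # RDD: low-level, functional transformations.
    rdd = sc.parallelize([("a", 1), ("b", 2), ("a", 3)])
    totals = rdd.reduceByKey(lambda x, y: x + y)

    # DataFrame: the same data with named columns and an optimiser.
    df = totals.toDF(["key", "total"])

    # Spark SQL: query the DataFrame through a temporary view.
    df.createOrReplaceTempView("totals")
    spark.sql("SELECT key, total FROM totals WHERE total > 2").show()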
Employment Type: Contract
Rate: £400 - £450/day

Senior Data Engineer

London, United Kingdom
Hybrid/Remote Options
Cognizant
… involves structuring analytical solutions that address business objectives and support problem solving. We are looking for hands-on experience in writing code for AWS Glue in Python, PySpark, and Spark SQL. The successful candidate will translate stated or implied client needs into researchable hypotheses, facilitate client working sessions, and be involved in recurring project status meetings. You will develop … relevant data points. Create solution hypotheses and secure client buy-in; discuss and align on end objectives, staffing needs, timelines, and budget.
Nice to have: Hive, Pig, NoSQL databases …
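As one way to picture the "Python, PySpark, and Spark SQL" plus Hive combination mentioned here, a sketch of Spark SQL running against a Hive-managed, partitioned table. The database, table, columns, and partition value are hypothetical.

    from pyspark.sql import SparkSession

    # enableHiveSupport lets Spark SQL read tables from an existing Hive metastore.
    spark = (
        SparkSession.builder
        .appName("hive-spark-sql")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Aggregate one day's partition of a hypothetical Hive table.
    spark.sql("""
        SELECT region, SUM(revenue) AS revenue
        FROM warehouse.sales
        WHERE dt = '2024-01-01'
        GROUP BY region
    """).show()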
Employment Type: Permanent, Work From Home

Developer (PySpark + Fabric)

London, United Kingdom
Stackstudio Digital Ltd
… business stakeholders to translate requirements into technical solutions. Create, maintain, and update documentation and internal knowledge repositories.
Your Profile
Essential Skills/Knowledge/Experience: Ability to write Spark code for large-scale data processing, including RDDs, DataFrames, and Spark SQL. Hands-on experience with lakehouses, dataflows, pipelines, and semantic models. Ability to build ETL workflows. …
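A hedged sketch of a small ETL step in a Microsoft Fabric-style lakehouse notebook, where files live under the lakehouse's Files/ area and output lands as a Delta table. The path, columns, and table name are assumptions.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Fabric notebooks pre-define `spark`; the builder is only here for self-containment.
    spark = SparkSession.builder.getOrCreate()

    # Hypothetical lakehouse-relative path to a raw CSV drop.
    raw = (
        spark.read.format("csv")
        .option("header", "true")
        .load("Files/raw/customers.csv")
    )

    # Basic ETL: de-duplicate and stamp the load time.
    cleaned = (
        raw.dropDuplicates(["customer_id"])
        .withColumn("loaded_at", F.current_timestamp())
    )

    # Land the result as a Delta table that dataflows and semantic models can build on.
    cleaned.write.format("delta").mode("overwrite").saveAsTable("customers_clean")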
Employment Type: Contract
Rate: From £475 to £500 per day
Median salary for Spark SQL jobs in London: £80,000