Spark SQL Jobs

1 to 25 of 68 Spark SQL Jobs

Senior Data Engineer (Azure, Spark SQL, Team Lead)

Greater Sheffield Area, United Kingdom
Hybrid / WFH Options
HOK Consulting - Technical Recruitment Consultancy
Job Title: Senior Data Engineer (Azure, Spark SQL, Team Lead) Duration: long-term contract Location: Hybrid (3 days/week from Sheffield) Visa: UK Citizen/ILR/Dependent visa (no visa sponsorship). We are looking for a Senior Data Engineer with hands-on expertise in … SQL/BigQuery migration, Azure Databricks, and Spark SQL, who also brings team leadership experience and thrives in Agile/SAFe/Scrum environments. Key Responsibilities: Lead and contribute to a small Agile team working on a cross-cloud data migration project. Migrate complex … BigQuery SQL transformations to Spark SQL on Azure. Build & execute ETL workflows using Azure Databricks and Python. Drive automation of SQL workflows and artefact migration across cloud providers. Collaborate with developers, POs, and stakeholders on quality delivery and performance optimization. Key Requirements …
Posted:
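The listing above centres on migrating BigQuery SQL transformations to Spark SQL and on "automation of SQL workflows". As a rough sketch of what such automation can involve, the pure-Python rewriter below maps two common BigQuery constructs to Spark SQL equivalents; the specific mappings and function name are illustrative assumptions (a real migration would use a SQL parser such as sqlglot rather than regexes):

```python
import re

def rewrite_bigquery_to_spark(sql: str) -> str:
    """Illustrative sketch: rewrite a couple of BigQuery idioms into Spark SQL."""
    # SAFE_DIVIDE(a, b) returns NULL when b = 0; Spark can emulate it with NULLIF.
    sql = re.sub(
        r"SAFE_DIVIDE\(\s*([^,]+?)\s*,\s*([^)]+?)\s*\)",
        r"(\1 / NULLIF(\2, 0))",
        sql,
        flags=re.IGNORECASE,
    )
    # BigQuery FORMAT_DATE('%Y-%m', d) ~ Spark date_format(d, 'yyyy-MM');
    # note the swapped argument order and different pattern syntax.
    # Only this one pattern literal is handled here.
    sql = re.sub(
        r"FORMAT_DATE\(\s*'%Y-%m'\s*,\s*([^)]+?)\s*\)",
        r"date_format(\1, 'yyyy-MM')",
        sql,
        flags=re.IGNORECASE,
    )
    return sql

print(rewrite_bigquery_to_spark(
    "SELECT SAFE_DIVIDE(revenue, users), FORMAT_DATE('%Y-%m', dt) FROM t"
))
# SELECT (revenue / NULLIF(users, 0)), date_format(dt, 'yyyy-MM') FROM t
```

Regex rewriting breaks down on nested expressions, which is exactly why roles like this one ask for hands-on migration experience rather than a one-off script.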

Graduate Data Engineer Python Spark SQL

Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom
Hybrid / WFH Options
Client Server
Graduate Data Engineer (Python Spark SQL) *Newcastle Onsite* to £33k. Do you have a first-class education combined with Data Engineering skills? You could be progressing your career at a start-up Investment Management firm that has secure backing, an established Hedge Fund client as a … by minimum A A A grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python, including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have experience … a range of events and early finish for drinks on Fridays. Apply now to find out more about this Graduate Data Engineer (Python Spark SQL) opportunity. At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn. …
Employment Type: Permanent, Work From Home
Salary: £30,000
Posted:

Data Engineer

Newcastle upon Tyne, England, United Kingdom
Hybrid / WFH Options
Somerset Bridge Group
Hands-on experience in building ELT pipelines and working with large-scale datasets using Azure Data Factory (ADF) and Databricks. Strong proficiency in SQL (T-SQL, Spark SQL) for data extraction, transformation, and optimisation. Proficiency in Azure Databricks (PySpark, Delta Lake, Spark SQL) for big data processing. Knowledge of data warehousing concepts and relational database design, particularly with Azure Synapse Analytics. Experience working with Delta Lake for schema evolution, ACID transactions, and time travel in Databricks. Strong Python (PySpark) skills for big data processing and automation. Experience with … Scala (optional but preferred for advanced Spark applications). Experience working with Databricks Workflows & Jobs for data orchestration. Strong knowledge of feature engineering and feature stores, particularly in Databricks Feature Store for ML training and inference. Experience with data modelling techniques to support analytics and reporting. Familiarity …
Posted:
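The Delta Lake requirements in this listing (schema evolution, ACID transactions, time travel) correspond to concrete Delta SQL features. A brief sketch, assuming a Delta table named `events` (the table name is illustrative, not from the listing):

```sql
-- Time travel: query the table as of an earlier version or timestamp.
SELECT * FROM events VERSION AS OF 12;
SELECT * FROM events TIMESTAMP AS OF '2024-06-01';

-- Inspect the transaction log that makes time travel and ACID guarantees possible.
DESCRIBE HISTORY events;

-- Schema evolution: allow writes/MERGE to add new columns automatically.
SET spark.databricks.delta.schema.autoMerge.enabled = true;
```

Each write to a Delta table appends a new version to its transaction log, which is what both the `VERSION AS OF` queries and the ACID behaviour build on.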

Data Engineer (f/m/x) (EN) - Hybrid

Dortmund, Nordrhein-Westfalen, Germany
Hybrid / WFH Options
NETCONOMY
Salary: 50.000 - 60.000 € per year. Requirements: • 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark • Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL) • Experience with core components of the Databricks … ELT processes, data modeling and techniques, and database systems • Proven experience with at least one major cloud platform (Azure, AWS, or GCP) • Excellent SQL skills for data querying, transformation, and analysis • Excellent communication and collaboration skills in English and German (min. B2 levels) • Ability to work independently as … work hands-on with the Databricks platform, supporting clients in solving complex data challenges. • Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python • Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based …
Employment Type: Permanent
Salary: EUR 50,000 - 60,000 Annual
Posted:

Data Engineer - Pyspark / Palantir

City of London, England, United Kingdom
Whitehall Resources Ltd
independently • Leading a team of Data Engineers and delivering solutions as a team. Key skills/knowledge/experience: • Proficient in PySpark, Python, SQL, with at least 5 years of experience • Working experience in the Palantir Foundry platform is a must • Experience designing and implementing data analytics solutions on enterprise data … platforms and distributed computing (Spark/Hive/Hadoop preferred). • Proven track record of understanding and transforming customer requirements into a best-fit design and architecture. • Demonstrated experience in end-to-end data management, data modelling, and data transformation for analytical use cases. • Proficient in SQL (Spark SQL preferred). • Experience with JavaScript/HTML/CSS a plus. • Experience working in a Cloud environment such as Azure or AWS is a plus. • Experience with Scrum/Agile development methodologies. • At least 7 years of experience working with large scale …
Posted:

Data Engineer

Skipton, England, United Kingdom
Hybrid / WFH Options
Skipton
Databricks, Data Factory, Storage, Key Vault. Experience with source control systems, such as Git; dbt (Data Build Tool) for transforming and modelling data; SQL (Spark SQL) & Python (PySpark). Certifications: Microsoft Certified: Azure Fundamentals (AZ-900); Microsoft Certified: Azure Data Fundamentals (DP-900). You will …
Posted:
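Several of these listings pair dbt with Spark SQL for transforming and modelling data. For readers unfamiliar with dbt, a minimal model sketch follows; `stg_transactions` is a hypothetical upstream staging model, not something named in the listings:

```sql
-- models/fct_daily_totals.sql
-- dbt compiles {{ ref(...) }} into the concrete table/view name at build time.
{{ config(materialized='table') }}

select
    date_trunc('day', transaction_ts) as transaction_date,
    sum(amount)                       as total_amount
from {{ ref('stg_transactions') }}
group by 1
```

The `ref()` call is also how dbt infers the dependency graph, so models like this one are built in the right order without hand-written orchestration.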

Senior Data Engineer

Skipton, England, United Kingdom
Skipton International Ltd
CI/CD tools. Key Technology: Experience with source control systems, such as Git; dbt (Data Build Tool) for transforming and modelling data; SQL (Spark SQL) & Python (PySpark). Certifications: You will need to be curious about technology and adaptable to new technologies, Agile …
Posted:

Lead Software Engineer - Spark, Java, Python, AWS

London, England, United Kingdom
JPMorgan Chase
architectures that business engineering teams buy into and build their applications around. Required Qualifications, Capabilities, and Skills: Experience across the data lifecycle with Spark-based frameworks for end-to-end ETL, ELT & reporting solutions using key components like Spark SQL & Spark Streaming. … end-to-end engineering experience supported by excellent tooling and automation. Preferred Qualifications, Capabilities, and Skills: Good understanding of the Big Data stack (Spark/Iceberg). Ability to learn new technologies and patterns on the job and apply them effectively. Good understanding of established patterns, such as …
Posted:

Product Owner - Data Platform

London, England, United Kingdom
Skipton Building Society
across teams. Key Technologies (awareness of): Azure Databricks, Data Factory, Storage, Key Vault; source control systems, such as Git; dbt (Data Build Tool), SQL (Spark SQL), Python (PySpark). Certifications (ideal): SAFe POPM or Scrum PSP; Microsoft Certified: Azure Fundamentals (AZ-900); Microsoft Certified: Azure …
Posted:

Databricks Engineer

London, United Kingdom
Tenth Revolution Group
ingestion from various data sources, performing complex transformations, and publishing data to Azure Data Lake or other storage services. Write efficient and standardized Spark SQL and PySpark code for data transformations, ensuring data integrity and accuracy across the pipeline. Automate pipeline orchestration using Databricks Workflows or … ingestion processes to handle structured, semi-structured, and unstructured data from various sources (APIs, databases, file systems). Implement data transformation logic using Spark, ensuring data is cleaned, transformed, and enriched according to business requirements. Leverage Databricks features such as Delta Lake to manage and track changes to … as encryption at rest and in transit, and role-based access control (RBAC) within Azure Databricks and Azure services. Performance Tuning & Optimization: Optimize Spark jobs for performance by tuning configurations, partitioning data, and caching intermediate results to minimize processing time and resource consumption. Continuously monitor and improve pipeline …
Employment Type: Contract
Rate: £400 - £500/day
Posted:
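The performance-tuning duties in this listing (tuning configurations, partitioning, caching) map onto a handful of standard Spark settings. A sketch of a spark-defaults.conf fragment; the values shown are illustrative starting points, not recommendations for any particular workload:

```properties
# Let Spark adapt shuffle partitioning at runtime (Spark 3.x AQE).
spark.sql.adaptive.enabled                       true
spark.sql.adaptive.coalescePartitions.enabled    true

# Static shuffle parallelism fallback; tune to cluster cores and data volume.
spark.sql.shuffle.partitions                     200

# Broadcast small dimension tables below this size (bytes) to avoid shuffle joins.
spark.sql.autoBroadcastJoinThreshold             10485760
```

Caching intermediate results (e.g. `df.cache()` before reuse) and partitioning data on frequently filtered columns complete the picture the listing describes.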

Big Data Developer

Irving, Texas, United States
Motion Recruitment
experience, or equivalent demonstrated through one or a combination … 7+ years of software engineering experience. 3+ years of experience working in Spark, Hadoop and Big Data. 3+ years of experience working on Spark SQL, Streaming and data frame/dataset API … 3+ years of experience working on Spark query tuning and performance optimization. Deep understanding of Hadoop/Cloud platforms, HDFS, ETL/ELT process and Unix shell scripting. 3+ years of experience working with Relational Database Management Systems (RDBMS) such as SQL Server, Oracle or MySQL. 3+ years of experience with SQL & NoSQL database integration with Spark (MS SQL Server and MongoDB). 2+ years of Agile experience. What You Will Be Doing: Consult on or participate in moderately complex initiatives and deliverables within Software Engineering and contribute to large …
Employment Type: Permanent
Salary: USD Annual
Posted:

Data Engineer (Databricks, Python, SQL)

City of London, England, United Kingdom
Toucanberry Tech
Job Title: Data Engineer (Databricks, Python, SQL). Location: Remote. Contract Type: Outside IR35. Start Date: ASAP. About Us: Toucanberry Tech is a boutique software consultancy delivering high-impact solutions to the financial sector. We work in agile, cross-functional teams to solve complex challenges with speed, autonomy, and … production-ready code. Strong communication skills and the ability to explain technical decisions clearly are key. Key Responsibilities: Develop and maintain dbt SQL models and macros. Collaborate with stakeholders to understand and refine modeling requirements. Debug and optimize existing models. Design ETL/ELT workflows and data … Work within Azure Databricks and follow code-based deployment practices. Must-Have Skills: 3 years of experience with Databricks (Lakehouse, Delta Lake, PySpark, Spark SQL). Strong SQL skills (5 years). Experience with Azure, focusing on Databricks. Excellent client-facing communication skills. Experience deploying Databricks …
Posted:

Senior Data Engineer

Exeter, England, United Kingdom
Hybrid / WFH Options
MBN Solutions
governance techniques. Good understanding of Quality and Information Security principles. Experience with Azure, ETL tools such as ADF and Databricks. Advanced database and SQL skills, along with Python, PySpark and Spark SQL. Strong understanding of data model design and implementation principles. Data …
Posted:

Senior Data Engineer (London Area)

London, UK
Mastek
performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy … throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Program in languages such as … SQL, Python, R, YAML and JavaScript. Data Integration: Integrate data from various sources, including relational databases, APIs, and streaming data sources. Implement data integration patterns and best practices. Work with API developers to ensure seamless data exchange. Data Quality & Governance: Hands-on experience using Azure Purview for …
Employment Type: Part-time
Posted:
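This listing opens with implementing "data quality checks and validation rules within data pipelines". A minimal, framework-agnostic sketch of that idea is below; the rule names and row shape are illustrative (in a Databricks pipeline the same logic would more likely live in PySpark expressions or Delta Live Tables expectations):

```python
from typing import Callable

# A validation rule is a name plus a per-row predicate.
Rule = tuple[str, Callable[[dict], bool]]

def run_checks(rows: list[dict], rules: list[Rule]) -> dict[str, int]:
    """Count failures per rule so a pipeline can fail fast or quarantine bad rows."""
    failures = {name: 0 for name, _ in rules}
    for row in rows:
        for name, predicate in rules:
            if not predicate(row):
                failures[name] += 1
    return failures

# Illustrative rules for a hypothetical transactions feed.
rules: list[Rule] = [
    ("amount_non_negative", lambda r: r.get("amount", 0) >= 0),
    ("currency_present", lambda r: bool(r.get("currency"))),
]

rows = [
    {"amount": 10.0, "currency": "GBP"},
    {"amount": -5.0, "currency": ""},
]
print(run_checks(rows, rules))
# {'amount_non_negative': 1, 'currency_present': 1}
```

Returning counts rather than raising on the first bad row is a common design choice: it lets the pipeline report all failing rules in one pass and decide centrally whether to halt or quarantine.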

Data Engineer (f/m/x)

Austria
Hybrid / WFH Options
NETCONOMY GmbH
platform, supporting clients in solving complex data challenges. Your Job's Key Responsibilities Are: Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python. Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based data lakes and warehouses. Leveraging the Databricks ecosystem (SQL, Delta Lake, Workflows, Unity Catalog) to deliver reliable and performant data workflows. Integrating with cloud services such as Azure, AWS, or GCP to enable secure, cost-effective data solutions. Contributing to data modeling and architecture decisions to ensure consistency … continuously improve our tools and approaches. Profile, Essential Skills: 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark. Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL). Experience with core components …
Employment Type: Permanent
Salary: EUR Annual
Posted:

Spark SQL salary percentiles:
10th Percentile: £44,750
25th Percentile: £46,250
Median: £62,000
75th Percentile: £91,250
90th Percentile: £100,250