Spark SQL Job Vacancies

1 to 25 of 60 Spark SQL Jobs

Senior Data Engineer (Azure, Spark SQL, Team Lead)

Sheffield, England, United Kingdom
Hybrid / WFH Options
HOK Consulting - Technical Recruitment Consultancy
Financial Services Location: Hybrid (3 days/week from Sheffield) We are looking for a Senior Data Engineer with hands-on expertise in SQL/BigQuery migration, Azure Databricks, and Spark SQL, who also brings team leadership experience and thrives in Agile/SAFe … Scrum environments. Key Responsibilities: Lead and contribute to a small Agile team working on a cross-cloud data migration project. Migrate complex BigQuery SQL transformations to Spark SQL on Azure. Build & execute ETL workflows using Azure Databricks and Python. Drive automation of SQL workflows and artefact migration across cloud providers. Collaborate with developers, POs, and stakeholders on quality delivery and performance optimization. Key Requirements: Strong SQL skills (BigQuery SQL & Spark SQL), Python, and ETL pipeline development. Experience with Azure and cloud data tools. Familiarity …
Posted:
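The BigQuery-to-Spark-SQL migration described in the Sheffield role above typically begins with translating dialect-specific idioms. A minimal sketch of that idea in Python (the function name and the two rewrite rules are illustrative only; a real migration would use a SQL transpiler such as sqlglot rather than regexes):

```python
import re

# Illustrative sketch: two common BigQuery idioms rewritten for Spark SQL.
# A production migration needs a real SQL parser, not regexes.
REWRITES = [
    # BigQuery SAFE_DIVIDE(a, b) -> division guarded by NULLIF in Spark SQL
    (re.compile(r"SAFE_DIVIDE\(\s*([^,]+?)\s*,\s*([^)]+?)\s*\)"),
     r"(\1 / NULLIF(\2, 0))"),
    # BigQuery TIMESTAMP_DIFF(end, start, DAY) -> Spark SQL datediff(end, start)
    (re.compile(r"TIMESTAMP_DIFF\(\s*([^,]+?)\s*,\s*([^,]+?)\s*,\s*DAY\s*\)"),
     r"datediff(\1, \2)"),
]

def bigquery_to_spark_sql(query: str) -> str:
    """Apply the idiom rewrites above to a BigQuery SQL string."""
    for pattern, replacement in REWRITES:
        query = pattern.sub(replacement, query)
    return query
```

The point of the sketch is that most of a cross-dialect migration is mechanical function-by-function translation, with a small residue of queries that need manual redesign.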

Graduate Data Engineer Python Spark SQL

Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom
Hybrid / WFH Options
Client Server
Graduate Data Engineer (Python Spark SQL) *Newcastle Onsite* to £33k Do you have a first-class education combined with Data Engineering skills? You could be progressing your career at a start-up Investment Management firm that has secure backing, an established Hedge Fund client as a … by minimum AAA grades at A-level You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python including PySpark and Pandas You have a good understanding of modern data engineering best practices Ideally you will also have experience … a range of events and early finish for drinks on Fridays Apply now to find out more about this Graduate Data Engineer (Python Spark SQL) opportunity. At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn.
Employment Type: Permanent, Work From Home
Salary: £30,000

Data Engineer

Newcastle upon Tyne, England, United Kingdom
Hybrid / WFH Options
Somerset Bridge
Hands-on experience in building ELT pipelines and working with large-scale datasets using Azure Data Factory (ADF) and Databricks. Strong proficiency in SQL (T-SQL, Spark SQL) for data extraction, transformation, and optimisation. Proficiency in Azure Databricks (PySpark, Delta Lake, Spark SQL) for big data processing. Knowledge of data warehousing concepts and relational database design, particularly with Azure Synapse Analytics. Experience working with Delta Lake for schema evolution, ACID transactions, and time travel in Databricks. Strong Python (PySpark) skills for big data processing and automation. Experience with … Scala (optional but preferred for advanced Spark applications). Experience working with Databricks Workflows & Jobs for data orchestration. Strong knowledge of feature engineering and feature stores, particularly the Databricks Feature Store for ML training and inference. Experience with data modelling techniques to support analytics and reporting. Familiarity with …
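The Delta Lake time travel mentioned in the listing above is queryable directly from Spark SQL; a brief sketch (the table name `events`, version number, and timestamp are hypothetical):

```sql
-- Inspect the table's commit history: version numbers, timestamps, operations
DESCRIBE HISTORY events;

-- Read the table as of an earlier version or point in time (Delta time travel)
SELECT * FROM events VERSION AS OF 12;
SELECT * FROM events TIMESTAMP AS OF '2024-01-01';
```

This is what makes Delta's ACID commit log useful in practice: every write produces a new table version that can be audited or re-read later.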

Data Engineer (f/m/x) (EN) - Hybrid

Dortmund, Nordrhein-Westfalen, Germany
Hybrid / WFH Options
NETCONOMY
Salary: 50.000 - 60.000 € per year Requirements: • 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark • Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL) • Experience with core components of the Databricks … ELT processes, data modeling techniques, and database systems • Proven experience with at least one major cloud platform (Azure, AWS, or GCP) • Excellent SQL skills for data querying, transformation, and analysis • Excellent communication and collaboration skills in English and German (min. B2 level) • Ability to work independently as … work hands-on with the Databricks platform, supporting clients in solving complex data challenges. • Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python • Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based …
Employment Type: Permanent
Salary: EUR 50,000 - 60,000 Annual

Data Engineer - Pyspark / Palantir

City of London, England, United Kingdom
Whitehall Resources Ltd
independently • Leading a team of Data Engineers and delivering solutions as a team Key skills/knowledge/experience: • Proficient in PySpark, Python, SQL with at least 5 years of experience • Working experience with the Palantir Foundry platform is a must • Experience designing and implementing data analytics solutions on enterprise data … platforms and distributed computing (Spark/Hive/Hadoop preferred). • Proven track record of understanding and transforming customer requirements into a best-fit design and architecture. • Demonstrated experience in end-to-end data management, data modelling, and data transformation for analytical use cases. • Proficient in SQL (Spark SQL preferred). • Experience with JavaScript/HTML/CSS a plus. • Experience working in a Cloud environment such as Azure or AWS is a plus. • Experience with Scrum/Agile development methodologies. • At least 7 years of experience working with large-scale …

Data Engineer

Skipton, England, United Kingdom
Hybrid / WFH Options
Skipton
Databricks, Data Factory, Storage, Key Vault Experience with source control systems, such as Git dbt (Data Build Tool) for transforming and modelling data SQL (Spark SQL) & Python (PySpark) Certifications: Microsoft Certified: Azure Fundamentals (AZ-900) Microsoft Certified: Azure Data Fundamentals (DP-900) You will …

Senior Data Engineer

Skipton, England, United Kingdom
Skipton International Ltd
CI/CD tools Key Technology: Experience with source control systems, such as Git dbt (Data Build Tool) for transforming and modelling data SQL (Spark SQL) & Python (PySpark) Certifications: You will need to be: curious about technology and adaptable to new technologies; Agile …

Senior Data Engineer

London, England, United Kingdom
Skipton Building Society
Understanding of Agile methodologies, CI/CD tools, and full software development lifecycle. Proficiency with Azure Databricks, Data Factory, Storage, Key Vault, Git, SQL (Spark SQL), and Python (PySpark). Certifications: Azure Fundamentals (AZ-900), Azure Data Fundamentals (DP-900). Curiosity about technology …

Data Engineer

London, England, United Kingdom
Hybrid / WFH Options
Skipton Building Society
experience Experience with CI/CD tools Key Technologies: Azure Databricks, Data Factory, Storage, Key Vault Source control experience (e.g., Git) Proficiency in SQL (Spark SQL) and Python (PySpark) Certifications: Microsoft Certified: Azure Fundamentals (AZ-900), Azure Data Fundamentals (DP-900) Curiosity about technology …

Product Owner - Data Platform

London, England, United Kingdom
Skipton Building Society
across teams. Key Technologies (awareness of) Azure Databricks, Data Factory, Storage, Key Vault Source control systems, such as Git dbt (Data Build Tool), SQL (Spark SQL), Python (PySpark) Certifications (Ideal) SAFe POPM or Scrum PSP Microsoft Certified: Azure Fundamentals (AZ-900) Microsoft Certified: Azure …

Databricks Engineer

London, United Kingdom
Tenth Revolution Group
ingestion from various data sources, performing complex transformations, and publishing data to Azure Data Lake or other storage services. Write efficient and standardized Spark SQL and PySpark code for data transformations, ensuring data integrity and accuracy across the pipeline. Automate pipeline orchestration using Databricks Workflows or … ingestion processes to handle structured, semi-structured, and unstructured data from various sources (APIs, databases, file systems). Implement data transformation logic using Spark, ensuring data is cleaned, transformed, and enriched according to business requirements. Leverage Databricks features such as Delta Lake to manage and track changes to … as encryption at rest and in transit, and role-based access control (RBAC) within Azure Databricks and Azure services. Performance Tuning & Optimization: Optimize Spark jobs for performance by tuning configurations, partitioning data, and caching intermediate results to minimize processing time and resource consumption. Continuously monitor and improve pipeline …
Employment Type: Contract
Rate: £400 - £500/day
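The partition tuning this role describes often starts from a crude sizing heuristic: aim for roughly 128 MB of input per Spark partition, which mirrors Spark's default `spark.sql.files.maxPartitionBytes`. A minimal sketch (the helper name is illustrative, not a Spark API, and 128 MB is a starting point, not a rule):

```python
def target_partitions(total_bytes: int, partition_bytes: int = 128 * 1024**2) -> int:
    """Rule-of-thumb partition count: dataset size over ~128 MB per partition.

    The result would typically feed DataFrame.repartition(n). Real tuning
    also weighs skew, core count, and shuffle sizes, not just input bytes.
    """
    # Ceiling division without floats; always at least one partition.
    return max(1, -(-total_bytes // partition_bytes))
```

For example, a 10 GB input suggests about 80 partitions before any skew or cluster-size adjustments.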

Big Data Developer

Irving, Texas, United States
Motion Recruitment
experience, or equivalent demonstrated through one or a combination … 7+ years of software engineering experience - Default Required 3+ years of experience working in Spark, Hadoop and Big Data 3+ years of experience working with Spark SQL, Streaming, and DataFrame/Dataset APIs … 3+ years of experience working on Spark query tuning and performance optimization Deep understanding of Hadoop/Cloud platforms, HDFS, ETL/ELT process and Unix shell scripting 3+ years of experience working with Relational Database Management Systems (RDBMS) such as SQL Server, Oracle or … MySQL 3+ years of experience with SQL & NoSQL database integration with Spark (MS SQL Server and MongoDB) 2+ years of Agile experience What You Will Be Doing Consult on or participate in moderately complex initiatives and deliverables within Software Engineering and contribute to large …
Employment Type: Permanent

Senior Data Engineer

Exeter, England, United Kingdom
Hybrid / WFH Options
MBN Solutions
governance techniques Good understanding of Quality and Information Security principles Experience with Azure, ETL tools such as ADF and Databricks Advanced database and SQL skills, along with Python, PySpark and Spark SQL Strong understanding of data model design and implementation principles Data …

Senior Data Engineer

Slough, Berkshire, UK
Mastek
performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy … throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Must be able to program in languages such as … SQL, Python, R, YAML and JavaScript Data Integration: Integrate data from various sources, including relational databases, APIs, and streaming data sources. Implement data integration patterns and best practices. Work with API developers to ensure seamless data exchange. Data Quality & Governance: Hands-on experience using Azure Purview for …
Employment Type: Full-time
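The "data quality checks and validation rules" this listing asks for can be expressed as a small registry of named rule functions. A minimal pure-Python sketch (rule names and thresholds are invented for illustration; in Databricks these would usually be PySpark column expressions or Delta Live Tables expectations):

```python
# Illustrative only: rule names and fields are hypothetical examples.
# Each rule maps a row (dict) to True if the row passes the check.
RULES = {
    "amount_non_negative": lambda row: row.get("amount", 0) >= 0,
    "currency_present": lambda row: bool(row.get("currency")),
}

def validate(rows):
    """Split rows into valid ones and (row_index, failed_rule_name) failures."""
    valid, failures = [], []
    for i, row in enumerate(rows):
        failed = [name for name, rule in RULES.items() if not rule(row)]
        if failed:
            failures.extend((i, name) for name in failed)
        else:
            valid.append(row)
    return valid, failures
```

Keeping rules as named entries makes the failure report auditable: each rejected row carries the name of every rule it broke, which is the shape most data-quality frameworks report in.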

Senior Data Engineer

London Area, United Kingdom
Mastek
performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy … throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Must be able to program in languages such as … SQL, Python, R, YAML and JavaScript Data Integration: Integrate data from various sources, including relational databases, APIs, and streaming data sources. Implement data integration patterns and best practices. Work with API developers to ensure seamless data exchange. Data Quality & Governance: Hands-on experience using Azure Purview for …

Senior Data Engineer

City of London, London, United Kingdom
Mastek
performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy … throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Must be able to program in languages such as … SQL, Python, R, YAML and JavaScript Data Integration: Integrate data from various sources, including relational databases, APIs, and streaming data sources. Implement data integration patterns and best practices. Work with API developers to ensure seamless data exchange. Data Quality & Governance: Hands-on experience using Azure Purview for …

Principal Data Engineer

London, England, United Kingdom
Epam
Requirements Minimum of 8 years of experience in data engineering At least 5 years of hands-on experience with Azure data services (Apache Spark, Azure Data Factory, Synapse Analytics, RDBMS such … as SQL Server) Proven leadership and management experience in data engineering teams Proficiency in PySpark, Python (with Pandas), T-SQL, SparkSQL, and experience with CI/CD pipelines Strong understanding of data modeling, ETL processes, and data warehousing concepts Knowledge of version control systems like Git … deadlines Azure certifications such as Microsoft Certified: Azure Data Engineer Associate or Azure Solutions Architect Nice to Have Experience with Scala for Apache Spark Knowledge of other cloud platforms like AWS or GCP Our Benefits Include Group pension plan, life assurance, income protection, and critical illness cover Private More ❯

Data Engineer (f/m/x)

Austria
Hybrid / WFH Options
NETCONOMY GmbH
platform, supporting clients in solving complex data challenges. Your Job's Key Responsibilities Are: Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based … data lakes and warehouses Leveraging the Databricks ecosystem (SQL, Delta Lake, Workflows, Unity Catalog) to deliver reliable and performant data workflows Integrating with cloud services such as Azure, AWS, or GCP to enable secure, cost-effective data solutions Contributing to data modeling and architecture decisions to ensure consistency … continuously improve our tools and approaches Profile Essential Skills: 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL) Experience with core components …
Employment Type: Permanent

Data Engineer (f/m/x)

Wien, Austria
Hybrid / WFH Options
NETCONOMY GmbH
platform, supporting clients in solving complex data challenges. Your Job's Key Responsibilities Are: Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based … data lakes and warehouses Leveraging the Databricks ecosystem (SQL, Delta Lake, Workflows, Unity Catalog) to deliver reliable and performant data workflows Integrating with cloud services such as Azure, AWS, or GCP to enable secure, cost-effective data solutions Contributing to data modeling and architecture decisions to ensure consistency … continuously improve our tools and approaches Profile Essential Skills: 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL) Experience with core components …
Employment Type: Permanent

Data Engineer (f/m/x)

Graz, Steiermark, Austria
Hybrid / WFH Options
NETCONOMY GmbH
platform, supporting clients in solving complex data challenges. Your Job's Key Responsibilities Are: Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based … data lakes and warehouses Leveraging the Databricks ecosystem (SQL, Delta Lake, Workflows, Unity Catalog) to deliver reliable and performant data workflows Integrating with cloud services such as Azure, AWS, or GCP to enable secure, cost-effective data solutions Contributing to data modeling and architecture decisions to ensure consistency … continuously improve our tools and approaches Profile Essential Skills: 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL) Experience with core components …
Employment Type: Permanent

Datawarehouse Developer

London, United Kingdom
Hybrid / WFH Options
Candour Solutions
The role This role sits within the Group Enterprise Systems (GES) Technology team. The ideal candidate is an experienced Microsoft data warehouse developer (SQL Server, SSIS, SSAS) capable of working independently and within a team to deliver enterprise-class data warehouse solutions and analytics platforms. The role involves … pipelines supporting BI and analytics use cases, ingesting, transforming, and loading data from multiple sources, structured and unstructured. Utilise enterprise-grade technology, primarily SQL Server 2019 and potentially Azure technologies, and explore other solutions where appropriate. Develop patterns, best practices, and standardized data pipelines to ensure consistency across … the organisation. Essential Core Technical Experience 5 to 10+ years of experience in SQL Server data warehouse or data provisioning architectures. Advanced SQL query writing and stored procedure experience. Experience developing ETL solutions in SQL Server, including SSIS & T-SQL. Experience with Microsoft BI …
Employment Type: Permanent

Data Analyst - Quality Expert

Belgium
Hybrid / WFH Options
LACO
data quality projects. You have extensive experience with Business Glossary, Data Catalog, Data Lineage or Reporting Governance You have strong SQL knowledge You have experience in Power BI, including DAX and data modeling techniques (Star Schemas (Kimball) as a minimum; others are nice to have). You get even … You have good experience with master data management You are familiar with data quality tools like Azure Purview (Collibra, Informatica, Soda) Python, Spark, PySpark, Spark SQL Other security protocols like CLS (Column-Level Security, Object-Level Security) You are fluent in Dutch and …
Employment Type: Permanent
Salary: EUR Annual

Senior/Lead Data Engineer

London, England, United Kingdom
Cognizant
level. You must be SC cleared to be considered for the role. Tasks and Responsibilities: Engineering: Ingestion configuration. Write Python/PySpark and Spark SQL code for validation/curation in notebooks. Create data integration test cases. Implement or amend worker pipelines. Implement data validation/… Web App. Good knowledge of real-time streaming applications, preferably with experience in Kafka real-time messaging or Azure Functions, Azure Service Bus. Spark processing and performance tuning. File formats and partitioning (e.g., Parquet, JSON, XML, CSV). Azure DevOps/GitHub. Hands-on experience in at least one …

Machine Learning Operations (ML Ops) Engineer

Ipswich, England, United Kingdom
Northampton Business Directory
via automated ML Ops. Ideally, you’ll also be technically skilled in most or all of the below: - Expert knowledge of Python and SQL, incl. the following libraries: NumPy, Pandas, PySpark and Spark SQL - Expert knowledge of ML Ops frameworks in the following categories …

Machine Learning Operations (ML Ops) Engineer

London, England, United Kingdom
Northampton Business Directory
via automated ML Ops. Ideally, you’ll also be technically skilled in most or all of the below: - Expert knowledge of Python and SQL, incl. the following libraries: NumPy, Pandas, PySpark and Spark SQL - Expert knowledge of ML Ops frameworks in the following categories …
Spark SQL salary percentiles
10th Percentile: £44,750
25th Percentile: £46,250
Median: £62,000
75th Percentile: £91,250
90th Percentile: £100,250