Apache Spark Jobs in the UK

1 to 25 of 240 Apache Spark Jobs in the UK

Senior Software Engineer

Manchester, Lancashire, United Kingdom
Anaplan Inc
production issues. Optimize applications for performance and responsiveness. Stay Up to Date with Technology: Keep yourself and the team updated on the latest Python technologies, frameworks, and tools like Apache Spark, Databricks, Apache Pulsar, Apache Airflow, Temporal, and Apache Flink, sharing knowledge and suggesting improvements. Documentation: Contribute to clear and concise documentation for software, processes … Experience with cloud platforms like AWS, GCP, or Azure. DevOps Tools: Familiarity with containerization (Docker) and infrastructure automation tools like Terraform or Ansible. Real-time Data Streaming: Experience with Apache Pulsar or similar systems for real-time messaging and stream processing is a plus. Data Engineering: Experience with Apache Spark, Databricks, or similar big data platforms for … processing large datasets, building data pipelines, and machine learning workflows. Workflow Orchestration: Familiarity with tools like Apache Airflow or Temporal for managing workflows and scheduling jobs in distributed systems. Stream Processing: Experience with Apache Flink or other stream processing frameworks is a plus. Desired Skills Asynchronous Programming: Familiarity with asynchronous programming tools like Celery or asyncio. Frontend Knowledge More ❯
Employment Type: Permanent
Salary: GBP Annual
Posted:

Solutions Architect - Big Data and DevOps (f/m/d)

England, United Kingdom
Stackable
robust way possible! Diverse training opportunities and social benefits (e.g. UK pension scheme) What do you offer? Strong hands-on experience working with modern Big Data technologies such as Apache Spark, Trino, Apache Kafka, Apache Hadoop, Apache HBase, Apache NiFi, Apache Airflow, OpenSearch. Proficiency in cloud-native technologies such as containerization and Kubernetes More ❯
Employment Type: Permanent
Salary: GBP Annual
Posted:

Databricks Azure Data Engineer x2 - UK Wide (Hybrid Working)

Nationwide, United Kingdom
Hybrid/Remote Options
Adecco
Engineer with an Azure focus, you will be an integral part of our team dedicated to building scalable and secure data platforms. You will leverage your expertise in Databricks, Apache Spark, and Azure to design, develop, and implement data warehouses, data lakehouses, and AI/ML models that fuel our data-driven operations. Skills/Experience Design and … build high-performance data pipelines: Utilize Databricks and Apache Spark to extract, transform, and load data into Azure Data Lake Storage and other Azure services. Develop and maintain secure data warehouses and data lakehouses: Implement data models, data quality checks, and governance practices to ensure reliable and accurate data. Build and deploy AI/ML models: Integrate Machine … and best practices with a focus on how AI can support you in your delivery work Solid experience as a Data Engineer or similar role. Proven expertise in Databricks, Apache Spark, and data pipeline development and strong understanding of data warehousing concepts and practices. Experience with Microsoft Azure cloud platform, including Azure Data Lake Storage, Databricks and Azure More ❯
Employment Type: Permanent
Salary: £72000 - £80000/annum + Benefits
Posted:
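The listing above describes an extract-transform-load flow: pull raw data with Spark, apply data quality checks, and load it into Azure Data Lake Storage. A minimal stdlib-only Python sketch of that three-stage shape (the record fields and function names are illustrative assumptions, not taken from the role; a real Databricks job would express each stage as Spark DataFrame operations):

```python
# Minimal ETL sketch. In a real Databricks job these three stages would run
# as Spark DataFrame operations against Azure Data Lake Storage; plain Python
# is used here only to show the shape of the pipeline.

def extract():
    # Stand-in for reading raw files from a data lake (fields are made up).
    return [
        {"order_id": "1", "amount": "120.50", "currency": "GBP"},
        {"order_id": "2", "amount": "bad-value", "currency": "GBP"},
    ]

def transform(rows):
    # Data quality check: drop rows whose amount does not parse as a number.
    clean = []
    for row in rows:
        try:
            clean.append({**row, "amount": float(row["amount"])})
        except ValueError:
            continue  # reject/quarantine invalid records
    return clean

def load(rows, warehouse):
    # Stand-in for upserting into a warehouse table keyed by order_id.
    for row in rows:
        warehouse[row["order_id"]] = row
    return warehouse

warehouse = load(transform(extract()), {})
```

The value of the split is that each stage can be swapped independently: the same `transform` logic applies whether `extract` reads local files or a lakehouse table.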

Databricks Data Architect x2 - UK Wide (Hybrid Working)

Nationwide, United Kingdom
Hybrid/Remote Options
Adecco
an Azure and Databricks focus, you will be an integral part of our team dedicated to building scalable and secure data platforms. You will leverage your expertise in Databricks, Apache Spark, and Azure to design, develop, and implement data warehouses, data lakehouses, and AI/ML models that fuel our data-driven operations. Duties Design and build high … performance data platforms: Utilize Databricks and Apache Spark to extract, transform, and load data into Azure Data Lake Storage and other Azure services. Design and oversee the delivery of secure data warehouses and data lakehouses: Implement data models, data quality checks, and governance practices to ensure reliable and accurate data. Ability to design, build and deploy AI/… to ensure successful data platform implementations. Your Skills and Experience Solid experience as a Data Architect with experience in designing, developing and implementing Databricks solutions Proven expertise in Databricks, Apache Spark, and data platforms with a strong understanding of data warehousing concepts and practices. Experience with Microsoft Azure cloud platform, including Azure Data Lake Storage, Databricks, and Azure More ❯
Employment Type: Permanent
Salary: £80000 - £90000/annum + Benefits
Posted:

Senior Data Platform Engineer

Luton, England, United Kingdom
Hybrid/Remote Options
easyJet
Job Accountabilities Develop robust, scalable data pipelines to serve the easyJet analyst and data science community. Highly competent hands-on experience with relevant Data Engineering technologies, such as Databricks, Spark, Spark API, Python, SQL Server, Scala. Work with data scientists, machine learning engineers and DevOps engineers to develop and deploy machine learning models and algorithms aimed at … indexing, partitioning. Hands-on IaC development experience with Terraform or CloudFormation. Understanding of ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or any other distributed data programming frameworks (e.g. Flink, Hadoop, Beam) Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture. Experience with data … e.g. access management, data privacy, handling of sensitive data (e.g. GDPR) Desirable Skills Experience in event-driven architecture, ingesting data in real time in a commercial production environment with Spark Streaming, Kafka, DLT or Beam. Understanding of the challenges faced in the design and development of a streaming data pipeline and the different options for processing unbounded data (pubsub More ❯
Posted:
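The easyJet listing above touches on processing unbounded data with Spark Streaming, Kafka or Beam. The core idea — grouping an endless event stream into fixed-size (tumbling) windows — can be sketched in stdlib-only Python (the event names and 10-second window are illustrative assumptions; a real streaming engine adds watermarks and incremental state to handle late, unending data):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group a stream of (timestamp, key) events into fixed tumbling
    windows and count events per key. Frameworks like Spark Structured
    Streaming do this incrementally with watermarks; this sketch assumes
    events arrive in order and the stream is finite for demonstration."""
    counts = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Each event falls into the window starting at the nearest
        # multiple of window_seconds at or before its timestamp.
        window_start = ts - (ts % window_seconds)
        counts[window_start][key] += 1
    return {w: dict(per_key) for w, per_key in counts.items()}

stream = [(0, "search"), (5, "booking"), (12, "search"), (19, "search")]
result = tumbling_window_counts(stream, window_seconds=10)
# Window [0,10): one "search", one "booking"; window [10,20): two "search".
```

The hard parts the listing alludes to — late arrivals, out-of-order events, and state that never stops growing — are exactly what watermarking and checkpointing in Spark Streaming or Beam exist to solve.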

Senior / Lead Data Engineer

City of London, London, United Kingdom
Sahaj Software
while staying close to the code. Perfect if you want scope for growth without going “post-technical.” What you’ll do Design and build modern data platforms using Databricks, Apache Spark, Snowflake, and cloud-native services (AWS, Azure, or GCP). Develop robust pipelines for real-time and batch data ingestion from diverse and complex sources. Model and … for Solid experience as a Senior/Lead Data Engineer in complex enterprise environments. Strong coding skills in Python (Scala or functional languages a plus). Expertise with Databricks, Apache Spark, and Snowflake (HDFS/HBase also useful). Experience integrating large, messy datasets into reliable, scalable data products. Strong understanding of data modelling, orchestration, and automation. Hands More ❯
Posted:

Senior / Lead Data Engineer

London Area, United Kingdom
Sahaj Software
while staying close to the code. Perfect if you want scope for growth without going “post-technical.” What you’ll do Design and build modern data platforms using Databricks, Apache Spark, Snowflake, and cloud-native services (AWS, Azure, or GCP). Develop robust pipelines for real-time and batch data ingestion from diverse and complex sources. Model and … for Solid experience as a Senior/Lead Data Engineer in complex enterprise environments. Strong coding skills in Python (Scala or functional languages a plus). Expertise with Databricks, Apache Spark, and Snowflake (HDFS/HBase also useful). Experience integrating large, messy datasets into reliable, scalable data products. Strong understanding of data modelling, orchestration, and automation. Hands More ❯
Posted:

Data Architect

Basildon, England, United Kingdom
Coforge
technologies – Azure, AWS, GCP, Snowflake, Databricks Must Have Hands-on experience on at least 2 Hyperscalers (GCP/AWS/Azure platforms) and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge on key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs … skills. A minimum of 5 years’ experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers] GCP, AWS, Azure, Big Data, Apache Spark, Beam, BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF Desirable Skills Designing Databricks based More ❯
Posted:

Data Engineer

City of London, London, England, United Kingdom
Equiniti
in Microsoft Fabric and Databricks, including data pipeline development, data warehousing, and data lake management Proficiency in Python, SQL, Scala, or Java Experience with data processing frameworks such as Apache Spark, Apache Beam, or Azure Data Factory Strong understanding of data architecture principles, data modelling, and data governance Experience with cloud-based data platforms, including Azure and More ❯
Employment Type: Full-Time
Salary: Competitive salary
Posted:

Data Engineer

London Area, United Kingdom
Hybrid/Remote Options
Omnis Partners
across sectors such as financial services, pharmaceuticals, energy, retail, healthcare, and manufacturing. The Role: Data Engineer (Databricks) We are seeking an experienced Data Engineer with strong expertise in Databricks, Apache Spark, Delta Lake, Python, and SQL to take a lead role in delivering innovative data projects. You will design and build scalable, cloud-based data pipelines on platforms … Apply modern engineering practices including CI/CD and automated testing. What You Bring: Proven experience as a Data Engineer working in cloud environments. Expert-level knowledge of Databricks, Apache Spark, and Delta Lake. Advanced Python and SQL programming skills. Strong understanding of CI/CD pipelines, automated testing, and data governance. Excellent communication and stakeholder engagement skills. More ❯
Posted:

Data Engineer

City of London, London, United Kingdom
Hybrid/Remote Options
Omnis Partners
across sectors such as financial services, pharmaceuticals, energy, retail, healthcare, and manufacturing. The Role: Data Engineer (Databricks) We are seeking an experienced Data Engineer with strong expertise in Databricks, Apache Spark, Delta Lake, Python, and SQL to take a lead role in delivering innovative data projects. You will design and build scalable, cloud-based data pipelines on platforms … Apply modern engineering practices including CI/CD and automated testing. What You Bring: Proven experience as a Data Engineer working in cloud environments. Expert-level knowledge of Databricks, Apache Spark, and Delta Lake. Advanced Python and SQL programming skills. Strong understanding of CI/CD pipelines, automated testing, and data governance. Excellent communication and stakeholder engagement skills. More ❯
Posted:

Scala Developer

Northampton, England, United Kingdom
Capgemini
us and help the world’s leading organizations unlock the value of technology and build a more sustainable, more inclusive world. YOUR ROLE We are looking for a skilled Spark/Scala Developer to join our data engineering team. The ideal candidate will have hands-on experience in designing, developing, and maintaining large-scale data processing pipelines using Apache Spark and Scala. You will work closely with data scientists, analysts, and engineers to build efficient data solutions and enable data-driven decision-making. YOUR PROFILE Develop, optimize, and maintain data pipelines and ETL processes using Apache Spark and Scala. Design scalable and robust data processing solutions for batch and real-time data. Collaborate with cross … functional teams to gather requirements and translate them into technical specifications. Perform data ingestion, transformation, and cleansing from various structured and unstructured sources. Monitor and troubleshoot Spark jobs, ensuring high performance and reliability. Write clean, maintainable, and well-documented code. Participate in code reviews, design discussions, and agile ceremonies. Implement data quality and governance best practices. Stay updated with More ❯
Posted:

Senior AWS Engineer

London, United Kingdom
Adroit People Ltd
Greetings! Adroit People is currently hiring Title: Senior AWS Data Engineer Location: London, UK Work Mode: Hybrid - 3 days/week Duration: 12 Months FTC Keywords: AWS, Python, Apache Spark, ETL Job Spec: We are building the next-generation data platform at FTSE Russell and we want you to shape it with us. Your role will involve: Designing … and developing scalable, testable data pipelines using Python and Apache Spark Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3 Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing Contributing to the development of a lakehouse architecture using Apache Iceberg Collaborating with business teams to translate requirements More ❯
Employment Type: Permanent
Posted:
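The "scalable, testable data pipelines" phrase in the listing above usually means keeping transformation logic in pure functions, separate from Spark and AWS I/O, so it can be unit-tested without a cluster. A hedged stdlib-only sketch of that pattern (the record fields and function names are illustrative assumptions, not FTSE Russell's actual schema):

```python
def normalise_record(record):
    """Pure transform: tidy the symbol field and round the price.
    Because it touches no Spark or AWS APIs, it can be unit-tested
    directly, then applied at scale via a DataFrame/RDD map."""
    return {
        "symbol": record["symbol"].strip().upper(),
        "price": round(record["price"], 2),
    }

def run_pipeline(records):
    # In production, an orchestration layer (Glue, EMR Serverless, Lambda
    # in the listing) would apply the same pure function across
    # distributed partitions instead of a local list.
    return [normalise_record(r) for r in records]

rows = run_pipeline([{"symbol": " ftse ", "price": 7421.4567}])
```

This separation is what makes the "automated testing" practice in the listing cheap: the transform is tested in milliseconds locally, and only the thin I/O wrapper needs integration tests against Glue or S3.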

Senior Data Engineer

Glasgow, Scotland, United Kingdom
Lorien
optimizing scalable data solutions using the Databricks platform. Key Responsibilities: • Lead the migration of existing AWS-based data pipelines to Databricks. • Design and implement scalable data engineering solutions using Apache Spark on Databricks. • Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines. • Optimize performance and cost-efficiency of Databricks workloads. • Develop and … best practices for data governance, security, and access control within Databricks. • Provide technical mentorship and guidance to junior engineers. Must-Have Skills: • Strong hands-on experience with Databricks and Apache Spark (preferably PySpark). • Proven track record of building and optimizing data pipelines in cloud environments. • Experience with AWS services such as S3, Glue, Lambda, Step Functions, Athena More ❯
Posted:

Sr. Databricks Engineer (AWS)

Glasgow, Lanarkshire, Scotland, United Kingdom
eTeam Inc
optimizing scalable data solutions using the Databricks platform. Key Responsibilities: Lead the migration of existing AWS-based data pipelines to Databricks. Design and implement scalable data engineering solutions using Apache Spark on Databricks. Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines. Optimize performance and cost-efficiency of Databricks workloads. Develop and … best practices for data governance, security, and access control within Databricks. Provide technical mentorship and guidance to junior engineers. Must-Have Skills: Strong hands-on experience with Databricks and Apache Spark (preferably PySpark). Proven track record of building and optimizing data pipelines in cloud environments. Experience with AWS services such as S3, Glue, Lambda, Step Functions, Athena More ❯
Employment Type: Contractor
Rate: £350 - £400 per day
Posted:

Senior Databricks engineer

Glasgow, Scotland, United Kingdom
Hybrid/Remote Options
Undisclosed
data solutions using the Databricks platform. Key Skills/requirements Lead the migration of existing AWS-based data pipelines to Databricks. Design and implement scalable data engineering solutions using Apache Spark on Databricks. Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines. Optimize performance and cost-efficiency of Databricks workloads. Develop and … best practices for data governance, security, and access control within Databricks. Provide technical mentorship and guidance to junior engineers. Must-Have Skills: Strong hands-on experience with Databricks and Apache Spark (preferably PySpark). Proven track record of building and optimizing data pipelines in cloud environments. Experience with AWS services such as S3, Glue, Lambda, Step Functions, Athena More ❯
Posted:

Senior Databricks Engineer CGEMJP

Glasgow, Lanarkshire, United Kingdom
Hybrid/Remote Options
Experis IT
data solutions using the Databricks platform. Key Skills/requirements Lead the migration of existing AWS-based data pipelines to Databricks. Design and implement scalable data engineering solutions using Apache Spark on Databricks. Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines. Optimize performance and cost-efficiency of Databricks workloads. Develop and … best practices for data governance, security, and access control within Databricks. Provide technical mentorship and guidance to junior engineers. Must-Have Skills: Strong hands-on experience with Databricks and Apache Spark (preferably PySpark). Proven track record of building and optimizing data pipelines in cloud environments. Experience with AWS services such as S3, Glue, Lambda, Step Functions, Athena More ❯
Employment Type: Contract
Rate: GBP Daily
Posted:

Databricks Engineer

Glasgow, Scotland, United Kingdom
Capgemini
optimizing scalable data solutions using the Databricks platform. YOUR PROFILE Lead the migration of existing AWS-based data pipelines to Databricks. Design and implement scalable data engineering solutions using Apache Spark on Databricks. Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines. Optimize performance and cost-efficiency of Databricks workloads. Develop and … within Databricks. Provide technical mentorship and guidance to junior engineers More ❯
Posted:

Data Engineer - ETL

Southam, England, United Kingdom
Electronic Arts (EA)
data security, privacy, and compliance frameworks ● Exposure to machine learning pipelines, MLOps, or AI-driven data products ● Experience with big data platforms and technologies such as EMR, Databricks, Kafka, Spark ● Exposure to AI/ML concepts and collaboration with data science or AI teams. ● Experience integrating data solutions with AI/ML platforms or supporting AI-driven analytics More ❯
Posted:

Data Engineer - ETL

Guildford, England, United Kingdom
Electronic Arts (EA)
data security, privacy, and compliance frameworks ● Exposure to machine learning pipelines, MLOps, or AI-driven data products ● Experience with big data platforms and technologies such as EMR, Databricks, Kafka, Spark ● Exposure to AI/ML concepts and collaboration with data science or AI teams. ● Experience integrating data solutions with AI/ML platforms or supporting AI-driven analytics More ❯
Posted:

Senior Data Engineer

Greater Cardiff Area, United Kingdom
Yolk Recruitment Ltd
understanding of data modelling, warehousing, and performance optimisation. Proven experience with cloud platforms (AWS, Azure, or GCP) and their data services. Hands-on experience with big data frameworks (e.g. Apache Spark, Hadoop). Strong knowledge of data governance, security, and compliance. Ability to lead technical projects and mentor junior engineers. Excellent problem-solving skills and experience in agile More ❯
Posted:

Senior Data Engineer

Cardiff, South Glamorgan, Wales, United Kingdom
Yolk Recruitment
understanding of data modelling, warehousing, and performance optimisation. Proven experience with cloud platforms (AWS, Azure, or GCP) and their data services. Hands-on experience with big data frameworks (e.g. Apache Spark, Hadoop). Strong knowledge of data governance, security, and compliance. Ability to lead technical projects and mentor junior engineers. Excellent problem-solving skills and experience in agile More ❯
Employment Type: Permanent
Salary: £75,000
Posted:

Data & Analytics Practice: Data Architect role - Junior level

England, United Kingdom
Infosys Consulting
modelling tools, data warehousing, ETL processes, and data integration techniques. Experience with at least one cloud data platform (e.g. AWS, Azure, Google Cloud) and big data technologies (e.g., Hadoop, Spark). Strong knowledge of data workflow solutions like Azure Data Factory, Apache NiFi, Apache Airflow, etc. Good knowledge of stream and batch processing solutions like Apache Flink, Apache Kafka. Good knowledge of log management, monitoring, and analytics solutions like Splunk, Elastic Stack, New Relic, etc. Given that this is just a short snapshot of the role we encourage you to apply even if you don't meet all the requirements listed above. We are looking for individuals who strive to make an impact and More ❯
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Engineer

Greater Cardiff Area, United Kingdom
Yolk Recruitment Ltd
pipelines and ETL processes. Proficiency in Python. Experience with cloud platforms (AWS, Azure, or GCP). Knowledge of data modelling, warehousing, and optimisation. Familiarity with big data frameworks (e.g. Apache Spark, Hadoop). Understanding of data governance, security, and compliance best practices. Strong problem-solving skills and experience working in agile environments. Desirable: Experience with Docker/Kubernetes More ❯
Posted:

Data Engineer

Cardiff, South Glamorgan, Wales, United Kingdom
Yolk Recruitment
pipelines and ETL processes. Proficiency in Python. Experience with cloud platforms (AWS, Azure, or GCP). Knowledge of data modelling, warehousing, and optimisation. Familiarity with big data frameworks (e.g. Apache Spark, Hadoop). Understanding of data governance, security, and compliance best practices. Strong problem-solving skills and experience working in agile environments. Desirable: Experience with Docker/Kubernetes More ❯
Employment Type: Permanent
Salary: £50,000
Posted:
Apache Spark salary percentiles (UK):
10th Percentile: £55,125
25th Percentile: £65,000
Median: £80,500
75th Percentile: £111,250
90th Percentile: £140,000