Apache Spark Jobs in the UK excluding London

1 to 25 of 218 Apache Spark Jobs in the UK excluding London

Data Engineer - UK Perm - Manchester Hybrid

Manchester, North West, United Kingdom
Hybrid / WFH Options
INFUSED SOLUTIONS LIMITED
culture. Key Responsibilities: Design, build, and maintain scalable data solutions to support business objectives. Work with Microsoft Fabric to develop robust data pipelines. Utilise Apache Spark and the Spark API to handle large-scale data processing. Contribute to data strategy, governance, and architecture best practices. Identify and … approaches. Collaborate with cross-functional teams to deliver projects on time. Key Requirements: Hands-on experience with Microsoft Fabric. Strong expertise in Apache Spark and Spark API. Knowledge of data architecture, engineering best practices, and governance. DP-600 & DP-700 certifications are highly …
Employment Type: Permanent, Work From Home
Salary: £70,000
Posted:
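The role above calls for hands-on use of Apache Spark and the Spark API for large-scale data processing. As a minimal, illustrative PySpark sketch only (the dataset, paths, and column names below are hypothetical, not taken from the listing), a batch aggregation of that kind might look like this:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: aggregate completed orders by day from a large Parquet dataset.
spark = SparkSession.builder.appName("daily-order-totals").getOrCreate()

orders = spark.read.parquet("/lakehouse/raw/orders")  # illustrative path

daily_totals = (
    orders
    .where(F.col("status") == "COMPLETE")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Persist the result as a table for downstream reporting.
daily_totals.write.mode("overwrite").saveAsTable("reporting.daily_order_totals")
```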

Data Engineer

Yeovil, Somerset, United Kingdom
Artis Recruitment
years of hands-on experience with big data tools and frameworks. Technical Skills: Proficiency in SQL, Python, and data pipeline tools such as Apache Kafka, Apache Spark, or AWS Glue. Problem-Solving: Strong analytical skills with the ability to troubleshoot and resolve data issues. Communication: Excellent communication …
Employment Type: Permanent
Salary: £50000 - £55000/annum + Annual Bonus & Excellent Benefits
Posted:

Cloud Data Engineer

Bracknell, Berkshire, United Kingdom
Icloudxcel
Databricks. Solid understanding of ETL processes, data modeling, and data warehousing. Familiarity with SQL and relational databases. Knowledge of big data technologies, such as Spark, Hadoop, or Kafka, is a plus. Strong problem-solving skills and the ability to work in a collaborative team environment. Excellent verbal and written …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Backend Engineer - Product Monetization (Remote - United Kingdom)

Birmingham, Staffordshire, United Kingdom
Hybrid / WFH Options
Yelp USA
to the experimentation and development of new ad products at Yelp. Design, build, and maintain efficient data pipelines using large-scale processing tools like Apache Spark to transform ad-related data. Manage high-volume, real-time data streams using Apache Kafka and process them with frameworks like … Apache Flink. Estimate timelines for projects, feature enhancements, and bug fixes. Work with large-scale data storage solutions, including Apache Cassandra and various data lake systems. Collaborate with cross-functional teams, including engineers, product managers, and data scientists, to understand business requirements and translate them into effective system … a proactive approach to identifying opportunities and recommending scalable, creative solutions. Exposure to some of the following technologies: Python, AWS Redshift, AWS Athena/Apache Presto, Big Data technologies (e.g. S3, Hadoop, Hive, Spark, Flink, Kafka, etc.), NoSQL systems like Cassandra, DBT is nice to have. What you …
Employment Type: Permanent
Salary: GBP Annual
Posted:
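The Yelp listing above pairs Apache Kafka with Apache Flink for stream processing and names Apache Spark for data pipelines. As one hedged illustration (using Spark Structured Streaming rather than Flink, purely because the listing also names Spark; the broker, topic, and paths are hypothetical), consuming a Kafka topic might look roughly like this:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Requires the spark-sql-kafka connector package on the Spark classpath.
spark = SparkSession.builder.appName("ad-events-stream").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "ad-events")                   # hypothetical topic
    .load()
)

# Kafka keys and values arrive as binary; cast to strings before further parsing.
decoded = events.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("value"),
    "timestamp",
)

query = (
    decoded.writeStream
    .format("parquet")
    .option("path", "/data/ad_events")               # illustrative sink
    .option("checkpointLocation", "/chk/ad_events")  # required for streaming sinks
    .start()
)
query.awaitTermination()
```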

Software Engineer (DV Security Clearance)

Gloucester, Gloucestershire, South West
Hybrid / WFH Options
CGI
IaC), automation & configuration management • Ansible (plus Puppet, Saltstack), Terraform, CloudFormation • NodeJS, REACT/MaterialUI (plus Angular), Python, JavaScript • Big data processing and analysis, e.g. Apache Hadoop (CDH), Apache Spark • RedHat Enterprise Linux, CentOS, Debian or Ubuntu. • Java 8, Spring framework (preferably Spring boot), AMQP - RabbitMQ, • Open source …
Employment Type: Permanent
Posted:

DataOps Engineer-1

Manchester, Lancashire, United Kingdom
Hybrid / WFH Options
Smart DCC
you be doing? Design and implement efficient ETL processes for data extraction, transformation, and loading. Build real-time data processing pipelines using platforms like Apache Kafka or cloud-native tools. Optimize batch processing workflows with tools like Apache Spark and Flink for scalable performance. Infrastructure Automation: Implement … Integrate cloud-based data services with data lakes and warehouses. Build and automate CI/CD pipelines with Jenkins, GitLab CI/CD, or Apache Airflow. Develop automated test suites for data pipelines, ensuring data quality and transformation integrity. Monitoring & Performance Optimization: Monitor data pipelines with tools like Prometheus …
Employment Type: Permanent
Salary: GBP Annual
Posted:
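The DataOps role above mentions automating CI/CD and orchestration with tools such as Apache Airflow. A minimal sketch of an Airflow DAG, assuming a recent Airflow 2.x release (the DAG id and task bodies are placeholders, not taken from the listing):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder extract step; a real pipeline would pull from a source system here.
    return "raw data"

def transform():
    # Placeholder transform step.
    return "clean data"

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # run transform only after extract succeeds
```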

Software Engineer (DV Security Clearance)

Gloucester, Gloucestershire, United Kingdom
ENGINEERINGUK
IaC), automation & configuration management Ansible (plus Puppet, Saltstack), Terraform, CloudFormation NodeJS, REACT/MaterialUI (plus Angular), Python, JavaScript Big data processing and analysis, e.g. Apache Hadoop (CDH), Apache Spark RedHat Enterprise Linux, CentOS, Debian or Ubuntu Java 8, Spring framework (preferably Spring boot), AMQP - RabbitMQ Open source …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Software Engineer (DV Security Clearance)

Manchester, North West
CGI
IaC), automation & configuration management; • Ansible (plus Puppet, Saltstack), Terraform, CloudFormation; • NodeJS, REACT/MaterialUI (plus Angular), Python, JavaScript; • Big data processing and analysis, e.g. Apache Hadoop (CDH), Apache Spark; • RedHat Enterprise Linux, CentOS, Debian or Ubuntu. • Java 8, Spring framework (preferably Spring boot), AMQP - RabbitMQ, • Open source …
Employment Type: Permanent
Posted:

AWS Technical Lead - Data & Analytics (USA)

Grimsby, Lincolnshire, United Kingdom
Adastra
to Have: AWS Certified Data Engineer, AWS Certified Data Analytics, or AWS Certified Solutions Architect. Experience with big data tools and technologies like Apache Spark, Hadoop, and Kafka. Knowledge of CI/CD pipelines and automation tools such as Jenkins or GitLab CI. About Adastra: For more …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Head of Platform Architecture

London, South East England, United Kingdom
Intelix.AI
e.g., Refinitiv, Bloomberg). Data Platforms: Warehouses: Snowflake, Google BigQuery, or Amazon Redshift. Analytics: Tableau, Power BI, or Looker for client reporting. Big Data: Apache Spark or Hadoop for large-scale processing. AI/ML: TensorFlow or Databricks for predictive analytics. Integration Technologies: API Management: Apigee, AWS API …
Posted:

Senior Data Engineer

London, South East England, United Kingdom
Montash
and contribute to code reviews and best practices. Skills & Experience: Strong expertise in Python and SQL for data engineering. Hands-on experience with Databricks, Spark, Delta Lake, Delta Live Tables. Experience in batch and real-time data processing. Proficiency with cloud platforms (AWS, Azure, Databricks). Solid understanding of data …
Posted:
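The listing above centres on Databricks, Spark, and Delta Lake. As a small illustrative sketch (assuming a Spark session with the Delta Lake extensions available, as on Databricks; paths and columns are hypothetical), appending a cleaned batch to a Delta table might look like this:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-append-demo").getOrCreate()

incoming = spark.read.json("/landing/events")  # hypothetical source path

cleaned = (
    incoming
    .dropDuplicates(["event_id"])                     # hypothetical key column
    .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
    .withColumn("ingested_at", F.current_timestamp())
)

# Append the batch to a Delta table partitioned by event date.
(
    cleaned.write
    .format("delta")
    .mode("append")
    .partitionBy("event_date")
    .save("/lakehouse/silver/events")  # hypothetical target path
)
```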

Enterprise Data & Analytics Platforms Director

Slough, Berkshire, United Kingdom
Mars, Incorporated and its Affiliates
platform management roles, with 5+ years in leadership positions. Expertise in modern data platforms (e.g., Azure, AWS, Google Cloud) and big data technologies (e.g., Spark, Kafka, Hadoop). Strong knowledge of data governance frameworks, regulatory compliance (e.g., GDPR, CCPA), and data security best practices. Proven experience in enterprise-level …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Scientist

London, South East England, United Kingdom
Hybrid / WFH Options
Careerwise
Qualifications: Master's or Ph.D. degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or related fields. Proven experience in Databricks and its ecosystem (Spark, Delta Lake, MLflow, etc.). Strong proficiency in Python and R for data analysis, machine learning, and data visualization. In-depth knowledge of cloud … BigQuery, Redshift, Data Lakes). Expertise in SQL for querying large datasets and optimizing performance. Experience working with big data technologies such as Hadoop, Apache Spark, and other distributed computing frameworks. Solid understanding of machine learning algorithms, data preprocessing, model tuning, and evaluation. Experience in working with LLM …
Posted:
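The data science role above names MLflow within the Databricks ecosystem. A brief hedged sketch of experiment tracking with MLflow and scikit-learn (the model, parameters, and data are purely illustrative, not from the listing):

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative synthetic dataset and model.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X_train, y_train)
    accuracy = model.score(X_test, y_test)

    # Log parameters, metrics, and the fitted model so runs can be compared in the MLflow UI.
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")
```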

Senior Data Engineer - Data Infrastructure and Architecture: C-4 Analytics

Wakefield, Yorkshire, United Kingdom
Hybrid / WFH Options
Flippa.com
Terraform, Flask, Pandas, FastAPI, Dagster, GraphQL, SQLAlchemy, GitLab, Athena. Your Trusted Companions: Docker, Snowflake, MongoDB, Relational Databases (e.g. MySQL, PostgreSQL), Dagster, Airflow/Luigi, Spark, Kubernetes. Your AWS Kingdom: Lambda, Redshift, EC2, ELB, IAM, RDS, Route53, S3 - the building blocks of cloud mastery. Your Philosophy: Continuous integration/deployments …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Senior Data Engineer

London, South East England, United Kingdom
Mentmore
practices to improve data engineering processes. Experience Required: Developing data processing pipelines in Python and SQL for Databricks including many of the following technologies: Spark, Delta, Delta Live Tables, PyTest, Great Expectations (or similar) and Jobs. Developing data pipelines for batch and stream processing and analytics. Building data pipelines …
Posted:
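The listing above asks for pipeline development with PyTest alongside Spark and Delta. A small hedged sketch of unit-testing a Spark transformation with pytest (the transformation and columns are invented for illustration):

```python
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def add_total_column(df):
    # Transformation under test: total = quantity * unit_price.
    return df.withColumn("total", F.col("quantity") * F.col("unit_price"))

@pytest.fixture(scope="session")
def spark():
    # Local Spark session shared across the test session.
    return SparkSession.builder.master("local[2]").appName("pipeline-tests").getOrCreate()

def test_add_total_column(spark):
    df = spark.createDataFrame([(2, 5.0), (3, 1.5)], ["quantity", "unit_price"])
    result = add_total_column(df).collect()
    assert [row["total"] for row in result] == [10.0, 4.5]
```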

Data Engineer - Databricks

City, Edinburgh, United Kingdom
Dufrain
Intelligence platform, either on Azure or AWS. Good working knowledge of Databricks components: Delta Lake, Unity Catalog, MLflow, etc. Expertise in SQL, Python and Spark (Scala or Python). Experience working with relational SQL databases either on premises or in the cloud. Experience delivering multiple solutions using key techniques …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Senior Data Engineer - MS Azure

London, South East England, United Kingdom
Hybrid / WFH Options
DATAHEAD
ensure high availability and accessibility. Experience & Skills: Strong experience in data engineering. At least some commercial hands-on experience with Azure data services (e.g., Apache Spark, Azure Data Factory, Synapse Analytics). Proven experience in leading and managing a team of data engineers. Proficiency in programming languages such …
Posted:

Data Engineer

Coalville, Leicestershire, East Midlands, United Kingdom
Hybrid / WFH Options
Ibstock PLC
and BI solutions. Ensure data accuracy, integrity, and consistency across the data platform. Knowledge, Skills and Experience: Essential: Strong expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development …
Employment Type: Permanent, Work From Home
Posted:

Senior DataOps Engineer

Coventry, Warwickshire, United Kingdom
Coventry Building Society
AWS (S3, Glue, Redshift, SageMaker) or other cloud platforms. Familiarity with Docker, Terraform, GitHub Actions, and Vault for managing secrets. Proficiency in SQL, Python, Spark, or Scala to work with data. Experience with databases used in Data Warehousing, Data Lakes, and Lakehouse setups, including both structured and unstructured data.
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Engineer Python Spark SQL

Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom
Hybrid / WFH Options
Client Server
Data Engineer (Python Spark SQL) *Newcastle Onsite* to £70k. Do you have a first-class education combined with Data Engineering skills? You could be progressing your career at a start-up Investment Management firm that has secure backing, an established Hedge Fund client as a partner and massive growth … scientific discipline, backed by minimum AAB grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have … will earn a competitive salary (to £70k) plus significant bonus and benefits package. Apply now to find out more about this Data Engineer (Python Spark SQL) opportunity. At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn. We're …
Employment Type: Permanent, Work From Home
Salary: £70,000
Posted:
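The role above combines SQL, Apache Spark, and Python including PySpark and Pandas. A minimal hedged sketch of querying a dataset with Spark SQL and pulling a small aggregate into pandas (the dataset and path are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-to-pandas").getOrCreate()

# Register an illustrative trades dataset as a temporary view, then query it with SQL.
trades = spark.read.parquet("/data/trades")  # hypothetical path
trades.createOrReplaceTempView("trades")

summary = spark.sql(
    """
    SELECT symbol, COUNT(*) AS trade_count, AVG(price) AS avg_price
    FROM trades
    GROUP BY symbol
    """
)

# Small aggregated results can be brought into pandas for further analysis or plotting.
summary_pd = summary.toPandas()
print(summary_pd.head())
```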

Lead AI Engineer - AI Innovation Team - GenAI Developer - Azure

London, South East England, United Kingdom
Hybrid / WFH Options
Aventis Solutions
services experience is desired but not essential. API development (FastAPI, Flask). Tech stack: Azure, Python, Databricks, Azure DevOps, ChatGPT, Groq, Cursor AI, JavaScript, SQL, Apache Spark, Kafka, Airflow, Azure ML, Docker, Kubernetes and many more. Role Overview: We are looking for someone who is as comfortable developing AI …
Posted:
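The AI engineering role above lists API development with FastAPI or Flask. A minimal hedged FastAPI sketch (the endpoint name and payload are invented; a real service would call a model backend):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Prompt(BaseModel):
    text: str

@app.post("/generate")
def generate(prompt: Prompt) -> dict:
    # Placeholder response; a real implementation would call an LLM backend here.
    return {"input": prompt.text, "output": "stubbed response"}

# Run with: uvicorn main:app --reload  (assuming this file is saved as main.py)
```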

ML Engineering Lead

Manchester, England, United Kingdom
talego
lifecycle management, including data pipelines, feature engineering, and model serving. Knowledge of MLOps practices, including versioning, monitoring, and automation. Familiarity with big data technologies (Spark, Hadoop, Databricks) is a plus. Strong problem-solving skills and ability to translate business needs into ML solutions. Excellent communication and leadership skills. Why …
Posted:

Data Engineer

London, South East England, United Kingdom
Digisourced
independently. Experience in working with data visualization tools. Experience with GCP tools: Cloud Functions, Dataflow, Dataproc and BigQuery. Experience with data processing frameworks: Beam, Spark, Hive, Flink. GCP data engineering certification is a merit. Hands-on experience with analytical tools such as Power BI or similar visualization tools. Exhibit …
Posted:

Customer Data Analytics Lead

London, South East England, United Kingdom
Montash
ll Bring 5+ years in data/analytics engineering, including 2+ years in a leadership or mentoring role. Strong hands-on expertise in Databricks, Spark, Python, PySpark, and Delta Live Tables. Experience designing and delivering scalable data pipelines and streaming data processing (e.g., Kafka, AWS Kinesis, or Azure …
Posted:

Senior Data Engineer

Richmond, North Yorkshire, Yorkshire, United Kingdom
Datix Limited
knowledge of programming languages, specifically Python and SQL. Expertise in data management, data architecture, and data visualization techniques. Experience with data processing frameworks like Apache Spark, Hadoop, or Flink. Strong understanding of database systems (SQL and NoSQL) and data warehousing technologies. Familiarity with cloud computing platforms (AWS, Azure …
Employment Type: Permanent
Posted:
Apache Spark salary percentiles in the UK excluding London:
10th Percentile: £45,800
25th Percentile: £52,500
Median: £65,000
75th Percentile: £77,500
90th Percentile: £93,250