Apache Spark Jobs in England

1 to 25 of 251 Apache Spark Jobs in England

Data Architect (Trading) (London)

London, UK
Hybrid / WFH Options
Keyrock
Expertise in data warehousing, data modelling, and data integration. Experience in MLOps and machine learning pipelines. Proficiency in SQL and data manipulation languages. Experience with big data platforms (including Apache Arrow, Apache Spark, Apache Iceberg, and Clickhouse) and cloud-based infrastructure on AWS. Education & Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related …
Employment Type: Full-time
Posted:

Data Engineer

London, United Kingdom
Sandtech
extract data from diverse sources, transform it into usable formats, and load it into data warehouses, data lakes or lakehouses. Big Data Technologies: Utilize big data technologies such as Spark, Kafka, and Flink for distributed data processing and analytics. Cloud Platforms: Deploy and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), leveraging … SQL for data manipulation and scripting. Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling. Experience in big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink. Experience in using modern data architectures, such as lakehouse. Experience with CI/CD pipelines and version control systems like Git. Knowledge of ETL tools and … technologies such as Apache Airflow, Informatica, or Talend. Knowledge of data governance and best practices in data management. Familiarity with cloud platforms and services such as AWS, Azure, or GCP for deploying and managing data solutions. Strong problem-solving and analytical skills with the ability to diagnose and resolve complex data-related issues. SQL (for database management and querying …
Employment Type: Permanent
Salary: GBP Annual
Posted:
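The extract-transform-load pattern that listings like the one above keep describing can be sketched in a few lines of dependency-free Python. This is an illustrative sketch only: the record shape, field names, and the list standing in for a warehouse are invented for the example, not taken from any posting.

```python
import json

def extract(raw_lines):
    """Extract: parse raw JSON lines pulled from a source system."""
    return [json.loads(line) for line in raw_lines]

def transform(records):
    """Transform: normalise field names and types into a usable format."""
    return [
        {"user_id": int(r["id"]), "country": r["country"].upper()}
        for r in records
        if r.get("country")  # drop records missing required fields
    ]

def load(rows, warehouse):
    """Load: append the cleaned rows to a warehouse table (a plain list here)."""
    warehouse.extend(rows)
    return len(rows)

warehouse_table = []
raw = ['{"id": "1", "country": "gb"}', '{"id": "2", "country": null}']
loaded = load(transform(extract(raw)), warehouse_table)
# the null-country record is filtered out during the transform step
```

The same three stages appear in every stack the ads name (Airflow, Glue, Databricks); only the extract sources and load targets change.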

Cloudera Developer

Chester, Cheshire, United Kingdom
Pontoon
data-based insights, collaborating closely with stakeholders. Passionately discover hidden solutions in large datasets to enhance business outcomes. Design, develop, and maintain data processing pipelines using Cloudera technologies, including Apache Hadoop, Apache Spark, Apache Hive, and Python. Collaborate with data engineers and scientists to translate data requirements into technical specifications. Develop and maintain frameworks for efficient …
Employment Type: Contract
Posted:

Big Data Solutions Architect (DE, Spark Architecture)

London, United Kingdom
Databricks Inc
in either Python or Scala Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one Deep experience with distributed computing with Apache Spark and knowledge of Spark runtime internals Familiarity with CI/CD for production deployments Working knowledge of MLOps Design and deployment of performant end-to-end … Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Engineer - Manager

London, United Kingdom
Cloud Decisions
Azure, AWS, GCP) Hands-on experience with SQL, Data Pipelines, Data Orchestration and Integration Tools Experience in data platforms on premises/cloud using technologies such as: Hadoop, Kafka, Apache Spark, Apache Flink, object, relational and NoSQL data stores. Hands-on experience with big data application development and cloud data warehousing (e.g. Hadoop, Spark, Redshift, Snowflake …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Solutions Architect (Data Analytics)- Presales, RFP creation (London)

London, UK
Vallum Associates
technologies Azure, AWS, GCP, Snowflake, Databricks Must Have: Hands-on experience with at least 2 Hyperscalers (GCP/AWS/Azure platforms) and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs … skills. A minimum of 5 years' experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka Dataflow/Airflow/ADF Designing Databricks based solutions for …
Employment Type: Full-time
Posted:

Sr. Data Scientist / Machine Learning Engineer - GenAI (London)

London, UK
Databricks
/or teaching technical concepts to non-technical and technical audiences alike Passion for collaboration, life-long learning, and driving business value through ML [Preferred] Experience working with Databricks & Apache Spark to process large-scale distributed datasets About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide including Comcast, Condé Nast, Grammarly, and … Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that …
Employment Type: Full-time
Posted:

Senior Data Scientist / Machine Learning Engineer - GenAI

London, United Kingdom
Databricks Inc
/or teaching technical concepts to non-technical and technical audiences alike Passion for collaboration, life-long learning, and driving business value through ML Preferred Experience working with Databricks & Apache Spark to process large-scale distributed datasets As a client-facing role, travel may be necessary to support meetings and engagements. About Databricks Databricks is the data and … Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Senior Sales Account Manager

London, United Kingdom
Hybrid / WFH Options
Datapao
company covering the entire data transformation from architecture to implementation. Beyond delivering solutions, we also provide data & AI training and enablement. We are backed by Databricks - the creators of Apache Spark, and act as a delivery partner and training provider for them in Europe. Additionally, we are Microsoft Gold Partners in delivering cloud migration and data architecture on …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Infrastructure/ Platform Engineer Apache

London, United Kingdom
Experis - ManpowerGroup
Role Title: Infrastructure/Platform Engineer - Apache Duration: 9 Months Location: Remote Rate: £ - Umbrella only Would you like to join a global leader in consulting, technology services and digital transformation? Our client is at the forefront of innovation to address the entire breadth of opportunities in the evolving world of cloud, digital and platforms. Role purpose/summary:
• Refactor prototype Spark jobs into production-quality components, ensuring scalability, test coverage, and integration readiness.
• Package Spark workloads for deployment via Docker/Kubernetes and integrate with orchestration systems (e.g., Airflow, custom schedulers).
• Work with platform engineers to embed Spark jobs into InfoSum's platform APIs and data pipelines.
• Troubleshoot job failures, memory and resource issues, and execution anomalies across various runtime environments.
• Optimize Spark job performance and advise on best practices to reduce cloud compute and storage costs.
• Guide engineering teams on choosing the right execution strategies across AWS, GCP, and Azure.
• Provide subject matter expertise on using AWS Glue for ETL workloads and integration with S3 and other AWS-native services.
• Implement observability tooling …
Employment Type: Permanent
Salary: GBP Annual
Posted:
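One concrete sizing rule sits behind the memory-troubleshooting and cost-optimisation duties in the listing above: on YARN or Kubernetes, the container a Spark executor requests is the executor memory plus an overhead, which Spark by default sets to the larger of 384 MiB or 10% of executor memory (the documented `spark.executor.memoryOverhead` behaviour). A quick dependency-free check of that arithmetic, with the defaults written out as assumptions:

```python
def container_memory_mib(executor_memory_mib, overhead_factor=0.10, min_overhead_mib=384):
    """Total container request = executor heap + overhead (Spark's default rule).

    The 0.10 factor and 384 MiB floor mirror Spark's documented defaults;
    deployments can override both, so treat this as a sketch, not gospel.
    """
    overhead = max(min_overhead_mib, int(executor_memory_mib * overhead_factor))
    return executor_memory_mib + overhead

big = container_memory_mib(4096)    # 10% (409 MiB) beats the 384 MiB floor
small = container_memory_mib(1024)  # the 384 MiB floor dominates
```

Jobs killed by the container manager despite "enough" executor memory are often missing exactly this overhead term.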

Infrastructure/ Platform Engineer Apache

London, United Kingdom
Experis
Role Title: Infrastructure/Platform Engineer - Apache Duration: 9 Months Location: Remote Rate: £ - Umbrella only Would you like to join a global leader in consulting, technology services and digital transformation? Our client is at the forefront of innovation to address the entire breadth of opportunities in the evolving world of cloud, digital and platforms. Role purpose/summary:
• Refactor prototype Spark jobs into production-quality components, ensuring scalability, test coverage, and integration readiness.
• Package Spark workloads for deployment via Docker/Kubernetes and integrate with orchestration systems (e.g., Airflow, custom schedulers).
• Work with platform engineers to embed Spark jobs into InfoSum's platform APIs and data pipelines.
• Troubleshoot job failures, memory and resource issues, and execution anomalies across various runtime environments.
• Optimize Spark job performance and advise on best practices to reduce cloud compute and storage costs.
• Guide engineering teams on choosing the right execution strategies across AWS, GCP, and Azure.
• Provide subject matter expertise on using AWS Glue for ETL workloads and integration with S3 and other AWS-native services.
• Implement observability tooling …
Employment Type: Contract
Posted:

Machine Learning Engineer (London)

London, UK
Hybrid / WFH Options
Synechron
Synechron is looking for a skilled Machine Learning Developer with expertise in Spark ML to work with a leading financial organisation on a global programme of work. The role involves predictive modeling, and deploying training and inference pipelines on distributed systems such as Hadoop. The ideal candidate will design, implement, and optimise machine learning solutions for large-scale data … processing and predictive analytics. Role: Develop and implement machine learning models using Spark ML for predictive analytics Design and optimise training and inference pipelines for distributed systems (e.g., Hadoop) Process and analyse large-scale datasets to extract meaningful insights and features Collaborate with data engineers to ensure seamless integration of ML workflows with data pipelines Evaluate model performance and … time and batch inference Monitor and troubleshoot deployed models to ensure reliability and performance Stay updated with advancements in machine learning frameworks and distributed computing technologies Experience: Proficiency in Apache Spark and Spark MLlib for machine learning tasks Strong understanding of predictive modeling techniques (e.g., regression, classification, clustering) Experience with distributed systems like Hadoop for data storage …
Employment Type: Full-time
Posted:
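The "evaluate model performance" duty in the listing above reduces to the same simple formulas whether the predictions come from Spark MLlib or anywhere else. A dependency-free sketch for binary classification; the label and prediction vectors are made up for illustration:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def precision_recall(y_true, y_pred, positive=1):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fp), tp / (tp + fn)

# Invented evaluation set: 8 examples, 3 true positives recovered,
# 2 false alarms, 1 missed positive
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 0, 1, 0, 1, 1, 0]
acc = accuracy(y_true, y_pred)
prec, rec = precision_recall(y_true, y_pred)
```

In Spark MLlib the equivalent numbers come from its evaluator classes over a distributed DataFrame; the arithmetic is identical.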

Head of Data Engineering

London, United Kingdom
Hybrid / WFH Options
Zego
Skills: Proven expertise in designing, building, and operating data pipelines, warehouses, and scalable data architectures. Deep hands-on experience with modern data stacks. Our tech includes Python, SQL, Snowflake, Apache Iceberg, AWS S3, PostgresDB, Airflow, dbt, and Apache Spark, deployed via AWS, Docker, and Terraform. Experience with similar technologies is essential. Coaching & Growth Mindset: Passion for developing …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Test Engineer

Nursling, Southampton, Hampshire, England, United Kingdom
Hybrid / WFH Options
Ordnance Survey
lead Support the Ordnance Survey Testing Community, with common standards such as metrics and use of test tools Here is a snapshot of the technologies that we use Scala, Apache Spark, Databricks, Apache Parquet, YAML, Azure Cloud Platform, Azure DevOps (Test plans, Backlogs, Pipelines), GIT, GeoJSON What we're looking for Highly skilled in creating, maintaining and …
Employment Type: Full-Time
Salary: £41,892 - £48,874 per annum
Posted:

Senior Data Engineer (UK)

London, United Kingdom
Hybrid / WFH Options
Atreides LLC
platform components. Big Data Architecture: Build and maintain big data architectures and data pipelines to efficiently process large volumes of geospatial and sensor data. Leverage technologies such as Hadoop, Apache Spark, and Kafka to ensure scalability, fault tolerance, and speed. Geospatial Data Integration: Develop systems that integrate geospatial data from a variety of sources (e.g., satellite imagery, remote … driven applications. Familiarity with geospatial data formats (e.g., GeoJSON, Shapefiles, KML) and tools (e.g., PostGIS, GDAL, GeoServer). Technical Skills: Expertise in big data frameworks and technologies (e.g., Hadoop, Spark, Kafka, Flink) for processing large datasets. Proficiency in programming languages such as Python, Java, or Scala, with a focus on big data frameworks and APIs. Experience with cloud services … or related field. Experience with data visualization tools and libraries (e.g., Tableau, D3.js, Mapbox, Leaflet) for displaying geospatial insights and analytics. Familiarity with real-time stream processing frameworks (e.g., Apache Flink, Kafka Streams). Experience with geospatial data processing libraries (e.g., GDAL, Shapely, Fiona). Background in defense, national security, or environmental monitoring applications is a plus. Compensation and …
Employment Type: Permanent
Salary: GBP Annual
Posted:
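The geospatial formats the listing above names are mostly plain JSON underneath: GeoJSON, for example, needs nothing beyond the standard library to parse. A small sketch computing the bounding box of a FeatureCollection; the two sensor readings are invented, and real pipelines would reach for GDAL or Shapely long before rolling their own geometry:

```python
import json

def bounding_box(geojson_str):
    """Return (min_lon, min_lat, max_lon, max_lat) over all Point features."""
    collection = json.loads(geojson_str)
    coords = [
        f["geometry"]["coordinates"]
        for f in collection["features"]
        if f["geometry"]["type"] == "Point"
    ]
    lons = [c[0] for c in coords]
    lats = [c[1] for c in coords]
    return min(lons), min(lats), max(lons), max(lats)

# Two illustrative readings; GeoJSON stores [longitude, latitude] in that order
doc = json.dumps({
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature", "geometry": {"type": "Point", "coordinates": [-0.13, 51.51]}},
        {"type": "Feature", "geometry": {"type": "Point", "coordinates": [-1.40, 50.95]}},
    ],
})
box = bounding_box(doc)
```

The longitude-first ordering trips up newcomers used to "lat, lon"; it is part of the GeoJSON specification, not a library quirk.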

Principal Engineer, BCG Expand, London

London, United Kingdom
Boston Consulting Group
two of the following: Python, SQL, Java Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams Deep knowledge of database technologies: Distributed systems (e.g., Spark, Hadoop, EMR) RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL) NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j) Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Machine Learning Engineer (London)

Hanwell, Greater London, UK
Lumilinks Group Ltd
and managing machine learning models and infrastructure. Data Management Knowledge: Understanding of data management principles, including experience with databases (SQL and NoSQL) and familiarity with big data frameworks like Apache Spark or Hadoop. Knowledge of data ingestion, storage, and management is essential. Monitoring and Logging Tools: Experience with monitoring and logging tools to track system performance and model …
Employment Type: Full-time
Posted:

Senior Data Engineer

London, United Kingdom
Mastek UK
Cleared: Required Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage …
Employment Type: Permanent
Salary: £60000 - £80000/annum
Posted:

Lead Data Scientist, Machine Learning Engineer 2025- UK

London, United Kingdom
Hybrid / WFH Options
Aimpoint Digital
science use-cases across various industries Design and develop feature engineering pipelines, build ML & AI infrastructure, deploy models, and orchestrate advanced analytical insights Write code in SQL, Python, and Spark following software engineering best practices Collaborate with stakeholders and customers to ensure successful project delivery Who we are looking for We are looking for collaborative individuals who want to …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Head of AI (London)

London, UK
Scrumconnect Consulting
SageMaker, GCP AI Platform, Azure ML, or equivalent). Solid understanding of data-engineering concepts: SQL/noSQL, data pipelines (Airflow, Prefect, or similar), and batch/streaming frameworks (Spark, Kafka). Leadership & Communication: Proven ability to lead cross-functional teams in ambiguous startup settings. Exceptional written and verbal communication skills, able to explain complex concepts to both technical and …
Employment Type: Full-time
Posted:

Data Engineer (SC cleared)

Guildford, Surrey, United Kingdom
Hybrid / WFH Options
Stott and May
Start: ASAP Duration: 12 months Location: Mostly Remote - must have access to London or Bristol Pay: negotiable, INSIDE IR35 Responsibilities: - Design and implement robust ETL/ELT data pipelines using Apache Airflow - Build ingestion processes from internal systems and APIs, using Kafka, Spark, AWS - Develop and maintain data lakes and warehouses (AWS S3, Redshift) - Ensure governance using automated testing … manage CI/CD pipelines for data deployments and ensure version control of DAGs - Apply best practice in security and compliance Required Tech Skills: - Python and SQL for processing - Apache Airflow, writing Airflow DAGs and configuring Airflow jobs - AWS cloud platform and services like S3, Redshift - Familiarity with big data processing using Apache Spark - Knowledge of modelling …
Employment Type: Permanent
Salary: GBP Annual
Posted:
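An Airflow DAG, as the role above requires, is at bottom just a dependency graph whose tasks run in topological order. That core idea can be shown without an Airflow installation using the standard library; the task names here are invented for illustration, and a real DAG would wrap each node in an Airflow operator:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: one extract feeds two transforms,
# which must both finish before the warehouse load runs.
# Each key maps a task to the set of tasks it depends on.
dag = {
    "transform_users":  {"extract"},
    "transform_events": {"extract"},
    "load_warehouse":   {"transform_users", "transform_events"},
}

# static_order() yields tasks with all dependencies satisfied first
order = list(TopologicalSorter(dag).static_order())
```

Airflow layers scheduling, retries, and backfills on top, but a DAG that `graphlib` would reject as cyclic is equally invalid there.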

SAS Data Engineer

London, United Kingdom
Talan Group
of Relational Databases and Data Warehousing concepts. Experience of Enterprise ETL tools such as Informatica, Talend, Datastage or Alteryx. Project experience using any of the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross and multi-platform experience. Team building and leading. You must be: Willing to work on client sites, potentially for extended periods. Willing to travel …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Junior Data Engineer

London, United Kingdom
Curveanalytics
Maths or similar Science or Engineering discipline Strong Python and other programming skills (Java and/or Scala desirable) Strong SQL background Some exposure to big data technologies (Hadoop, Spark, Presto, etc.) NICE TO HAVES OR EXCITED TO LEARN: Some experience designing, building and maintaining SQL databases (and/or NoSQL) Some experience with designing efficient physical data models …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Enterprise Data & Analytics Platforms Director

Slough, Berkshire, United Kingdom
Mars, Incorporated and its Affiliates
in data engineering, architecture, or platform management roles, with 5+ years in leadership positions. Expertise in modern data platforms (e.g., Azure, AWS, Google Cloud) and big data technologies (e.g., Spark, Kafka, Hadoop). Strong knowledge of data governance frameworks, regulatory compliance (e.g., GDPR, CCPA), and data security best practices. Proven experience in enterprise-level architecture design and implementation. Hands …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Senior Data Engineer

Leeds, West Yorkshire, Yorkshire, United Kingdom
The Bridge (IT Recruitment) Limited
able to work across the full data cycle. • Proven experience working with AWS data technologies (S3, Redshift, Glue, Lambda, Lake Formation, CloudFormation), GitHub, CI/CD • Coding experience in Apache Spark, Iceberg or Python (Pandas) • Experience in change and release management. • Experience in data warehouse design and data modelling • Experience managing data migration projects. • Cloud data platform development … the AWS services like Redshift, Lambda, S3, Step Functions, Batch, CloudFormation, Lake Formation, CodeBuild, CI/CD, GitHub, IAM, SQS, SNS, Aurora DB • Good experience with DBT, Apache Iceberg, Docker, Microsoft BI stack (nice to have) • Experience in data warehouse design (Kimball, lakehouse, medallion and data vault) is a definite preference, as is knowledge of … other data tools and programming languages such as Python & Spark, and strong SQL experience. • Experience in building data lakes and CI/CD data pipelines • Candidates are expected to understand and demonstrate experience across the delivery lifecycle and understand both Agile and Waterfall methods and when to apply these. Experience: This position requires several years of …
Employment Type: Permanent
Salary: £65,000
Posted:
Apache Spark salary statistics, England:
10th Percentile: £46,975
25th Percentile: £56,250
Median: £75,000
75th Percentile: £95,000
90th Percentile: £115,000