Apache Spark Jobs in England

1 to 25 of 210 Apache Spark Jobs in England

Solutions Architect - Big Data and DevOps (f/m/d)

England, United Kingdom
Stackable
robust way possible! Diverse training opportunities and social benefits (e.g. UK pension scheme) What do you offer? Strong hands-on experience working with modern Big Data technologies such as Apache Spark, Trino, Apache Kafka, Apache Hadoop, Apache HBase, Apache NiFi, Apache Airflow, OpenSearch. Proficiency in cloud-native technologies such as containerization and Kubernetes More ❯
Employment Type: Permanent
Salary: GBP Annual
Posted:

Senior Software Engineer

Manchester, Lancashire, United Kingdom
Anaplan Inc
production issues. Optimize applications for performance and responsiveness. Stay Up to Date with Technology: Keep yourself and the team updated on the latest Python technologies, frameworks, and tools like Apache Spark, Databricks, Apache Pulsar, Apache Airflow, Temporal, and Apache Flink, sharing knowledge and suggesting improvements. Documentation: Contribute to clear and concise documentation for software, processes … Experience with cloud platforms like AWS, GCP, or Azure. DevOps Tools: Familiarity with containerization (Docker) and infrastructure automation tools like Terraform or Ansible. Real-time Data Streaming: Experience with Apache Pulsar or similar systems for real-time messaging and stream processing is a plus. Data Engineering: Experience with Apache Spark, Databricks, or similar big data platforms for … processing large datasets, building data pipelines, and machine learning workflows. Workflow Orchestration: Familiarity with tools like Apache Airflow or Temporal for managing workflows and scheduling jobs in distributed systems. Stream Processing: Experience with Apache Flink or other stream processing frameworks is a plus. Desired Skills Asynchronous Programming: Familiarity with asynchronous programming tools like Celery or asyncio. Frontend Knowledge More ❯
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Developer. C# + (either Clickhouse, SingleStore, Rockset, TimescaleDB) + open standard datalake (e.g. Iceberg or Delta tables, Apache Spark, Column store). £700/Day. 6 month rolling. Hybrid.

Greater London, England, United Kingdom
Hybrid/Remote Options
CommuniTech Recruitment Group
Data Developer. C# + (either Clickhouse, SingleStore, Rockset, TimescaleDB) + open standard datalake (e.g. Iceberg or Delta tables, Apache Spark, Column store). £700/Day. 6 month rolling. Hybrid. My client is a top tier commodities trading firm that is looking for a strong C# Data Engineer. The key things in summary are: Strong experience of .NET … Have you worked with an analytical database such as Clickhouse, SingleStore, Rockset, TimescaleDB? Have you got any experience working with an open standard datalake (e.g. Iceberg or Delta tables, Apache Spark, Column store)? Have you got any experience processing (e.g. ingesting into a database) a large amount of data (in batches would be fine)? Required Skills and Experience More ❯
Posted:

Data Engineer

Basingstoke, Hampshire, England, United Kingdom
INTEC SELECT LIMITED
Data Engineer - Azure Databricks, Apache Kafka Permanent Basingstoke (Hybrid - x2 PW) Circa £70,000 + Excellent Package Overview We're looking for a skilled Data Analytics Engineer to help drive the evolution of our client's data platform. This role is ideal for someone who thrives on building scalable data solutions and is confident working with modern tools such as … Azure Databricks, Apache Kafka, and Spark. In this role, you'll play a key part in designing, delivering, and optimising data pipelines and architectures. Your focus will be on enabling robust data ingestion and transformation to support both operational and analytical use cases. If you're passionate about data engineering and want to make a meaningful impact … in a collaborative, fast-paced environment, we want to hear from you! Role and Responsibilities Designing and building scalable data pipelines using Apache Spark in Azure Databricks Developing real-time and batch data ingestion workflows, ideally using Apache Kafka Collaborating with data scientists, analysts, and business stakeholders to build high-quality data products Supporting the deployment and More ❯
Employment Type: Full-Time
Salary: £65,000 - £70,000 per annum
Posted:
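The Basingstoke role above centres on batch and real-time ingestion with Spark and Kafka. Spark Structured Streaming treats an unbounded source (such as a Kafka topic) as a sequence of small batches. As a hedged, stdlib-only Python sketch of that micro-batching idea (the function name and batch size here are invented for illustration, not taken from the listing):

```python
from typing import Iterable, Iterator

def micro_batches(events: Iterable[int], batch_size: int) -> Iterator[list[int]]:
    """Group an unbounded event stream into fixed-size micro-batches.

    This loosely mirrors the micro-batch execution model that Spark
    Structured Streaming applies to unbounded sources like Kafka topics.
    """
    buf: list[int] = []
    for event in events:
        buf.append(event)
        if len(buf) == batch_size:
            yield buf  # a "trigger" fires once the batch is full
            buf = []
    if buf:  # flush the final partial batch
        yield buf

print(list(micro_batches(range(7), 3)))  # → [[0, 1, 2], [3, 4, 5], [6]]
```

In a real Databricks pipeline the batching, checkpointing, and Kafka offsets are all handled by the engine; this sketch only shows why "real-time" and "batch" ingestion can share one processing model.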

Senior Data Platform Engineer

Luton, England, United Kingdom
Hybrid/Remote Options
easyJet
Job Accountabilities Develop robust, scalable data pipelines to serve the easyJet analyst and data science community. Highly competent hands-on experience with relevant Data Engineering technologies, such as Databricks, Spark, Spark API, Python, SQL Server, Scala. Work with data scientists, machine learning engineers and DevOps engineers to develop and deploy machine learning models and algorithms aimed at … indexing, partitioning. Hands-on IaC development experience with Terraform or CloudFormation. Understanding of ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or any other distributed data programming frameworks (e.g. Flink, Hadoop, Beam) Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture. Experience with data … e.g. access management, data privacy, handling of sensitive data (e.g. GDPR) Desirable Skills Experience in event-driven architecture, ingesting data in real time in a commercial production environment with Spark Streaming, Kafka, DLT or Beam. Understanding of the challenges faced in the design and development of a streaming data pipeline and the different options for processing unbounded data (pubsub More ❯
Posted:

Senior AWS Data Engineer

London, United Kingdom
Adroit People Ltd
Greetings! Adroit People is currently hiring Title: AWS Data Engineer Location: London, UK Work Mode: Hybrid Duration: 12 Months FTC Keywords: AWS, PYTHON, Glue, EMR Serverless, Lambda, S3, SPARK Job Spec: WHAT YOU'LL BE DOING: We are building the next-generation data platform at FTSE Russell and we want you to shape it with us. Your role … will involve: Designing and developing scalable, testable data pipelines using Python and Apache Spark Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3 Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing Contributing to the development of a lakehouse architecture using Apache Iceberg Collaborating with business teams … ideally with type hints, linters, and tests like pytest) Understands data engineering basics: batch processing, schema evolution, and building ETL pipelines Has experience with or is eager to learn Apache Spark for large-scale data processing Is familiar with the AWS data stack (e.g. S3, Glue, Lambda, EMR) Enjoys learning the business context and working closely with stakeholders More ❯
Employment Type: Permanent
Posted:
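The FTSE Russell spec above leans on ETL fundamentals: batch extraction, transformation with basic quality checks, and loading. As a toy, hedged illustration in plain Python (the CSV fields, tickers, and in-memory store are invented for this sketch; the actual platform described in the ad uses Spark, Glue, and Iceberg):

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Parse one CSV batch into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Normalise types and apply a basic quality check."""
    out = []
    for row in rows:
        try:
            price = float(row["price"])
        except (KeyError, ValueError):
            continue  # quality check: drop malformed rows
        out.append({"ticker": row["ticker"].upper(), "price": price})
    return out

def load(rows: list[dict], store: dict) -> None:
    """Upsert rows into a store keyed by ticker."""
    for row in rows:
        store[row["ticker"]] = row["price"]

batch = "ticker,price\nabc,101.5\nxyz,not-a-number\ndef,99.0\n"
store: dict = {}
load(transform(extract(batch)), store)
print(store)  # → {'ABC': 101.5, 'DEF': 99.0}  (malformed xyz row filtered out)
```

Splitting the three stages into small pure functions is what makes a pipeline "testable" in the sense the ad describes: each stage can be covered by a pytest unit test without any AWS infrastructure.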

Senior / Lead Data Engineer

London Area, United Kingdom
Sahaj Software
while staying close to the code. Perfect if you want scope for growth without going “post-technical.” What you’ll do Design and build modern data platforms using Databricks, Apache Spark, Snowflake, and cloud-native services (AWS, Azure, or GCP). Develop robust pipelines for real-time and batch data ingestion from diverse and complex sources. Model and … for Solid experience as a Senior/Lead Data Engineer in complex enterprise environments. Strong coding skills in Python (Scala or functional languages a plus). Expertise with Databricks, Apache Spark, and Snowflake (HDFS/HBase also useful). Experience integrating large, messy datasets into reliable, scalable data products. Strong understanding of data modelling, orchestration, and automation. Hands More ❯
Posted:

Senior / Lead Data Engineer

City of London, London, United Kingdom
Sahaj Software
while staying close to the code. Perfect if you want scope for growth without going “post-technical.” What you’ll do Design and build modern data platforms using Databricks, Apache Spark, Snowflake, and cloud-native services (AWS, Azure, or GCP). Develop robust pipelines for real-time and batch data ingestion from diverse and complex sources. Model and … for Solid experience as a Senior/Lead Data Engineer in complex enterprise environments. Strong coding skills in Python (Scala or functional languages a plus). Expertise with Databricks, Apache Spark, and Snowflake (HDFS/HBase also useful). Experience integrating large, messy datasets into reliable, scalable data products. Strong understanding of data modelling, orchestration, and automation. Hands More ❯
Posted:

Data Engineer

City of London, London, England, United Kingdom
Equiniti
in Microsoft Fabric and Databricks, including data pipeline development, data warehousing, and data lake management Proficiency in Python, SQL, Scala, or Java Experience with data processing frameworks such as Apache Spark, Apache Beam, or Azure Data Factory Strong understanding of data architecture principles, data modelling, and data governance Experience with cloud-based data platforms, including Azure and More ❯
Employment Type: Full-Time
Salary: Competitive salary
Posted:

AWS Data Engineer

London Area, United Kingdom
Hybrid/Remote Options
N Consulting Global
generation data platform at FTSE Russell — and we want you to shape it with us. Your role will involve: • Designing and developing scalable, testable data pipelines using Python and Apache Spark • Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3 • Applying modern software engineering practices: version control, CI/CD, modular design, and automated … testing • Contributing to the development of a lakehouse architecture using Apache Iceberg • Collaborating with business teams to translate requirements into data-driven solutions • Building observability into data flows and implementing basic quality checks • Participating in code reviews, pair programming, and architecture discussions • Continuously learning about the financial indices domain and sharing insights with the team WHAT YOU'LL BRING … ideally with type hints, linters, and tests like pytest) Understands data engineering basics: batch processing, schema evolution, and building ETL pipelines Has experience with or is eager to learn Apache Spark for large-scale data processing Is familiar with the AWS data stack (e.g. S3, Glue, Lambda, EMR) Enjoys learning the business context and working closely with stakeholders More ❯
Posted:

AWS Data Engineer

City of London, London, United Kingdom
Hybrid/Remote Options
N Consulting Global
generation data platform at FTSE Russell — and we want you to shape it with us. Your role will involve: • Designing and developing scalable, testable data pipelines using Python and Apache Spark • Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3 • Applying modern software engineering practices: version control, CI/CD, modular design, and automated … testing • Contributing to the development of a lakehouse architecture using Apache Iceberg • Collaborating with business teams to translate requirements into data-driven solutions • Building observability into data flows and implementing basic quality checks • Participating in code reviews, pair programming, and architecture discussions • Continuously learning about the financial indices domain and sharing insights with the team WHAT YOU'LL BRING … ideally with type hints, linters, and tests like pytest) Understands data engineering basics: batch processing, schema evolution, and building ETL pipelines Has experience with or is eager to learn Apache Spark for large-scale data processing Is familiar with the AWS data stack (e.g. S3, Glue, Lambda, EMR) Enjoys learning the business context and working closely with stakeholders More ❯
Posted:

Staff Data Engineer

London, United Kingdom
Hybrid/Remote Options
Fruition Group
experience in a leadership or technical lead role, with official line management responsibility. Strong experience with modern data stack technologies, including Python, Snowflake, AWS (S3, EC2, Terraform), Airflow, dbt, Apache Spark, Apache Iceberg, and Postgres. Skilled in balancing technical excellence with business priorities in a fast-paced environment. Strong communication and stakeholder management skills, able to translate More ❯
Employment Type: Permanent
Posted:

AWS DATA ENGINEER

City of London, London, United Kingdom
Adroit People Limited (UK)
Greetings! Adroit People is currently hiring Title: Senior AWS Data Engineer Location: London, UK Work Mode: Hybrid-3 DAYS/WEEK Duration: 12 Months FTC Keywords: AWS,PYTHON,APACHE,SPARK,ETL Job Spec: We are building the next-generation data platform at FTSE Russell — and we want you to shape it with us. Your role will involve: ∙ Designing … and developing scalable, testable data pipelines using Python and Apache Spark ∙ Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3 ∙ Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing ∙ Contributing to the development of a lakehouse architecture using Apache Iceberg ∙ Collaborating with business teams to translate requirements More ❯
Posted:

AWS DATA ENGINEER

London Area, United Kingdom
Adroit People Limited (UK)
Greetings! Adroit People is currently hiring Title: Senior AWS Data Engineer Location: London, UK Work Mode: Hybrid-3 DAYS/WEEK Duration: 12 Months FTC Keywords: AWS,PYTHON,APACHE,SPARK,ETL Job Spec: We are building the next-generation data platform at FTSE Russell — and we want you to shape it with us. Your role will involve: ∙ Designing … and developing scalable, testable data pipelines using Python and Apache Spark ∙ Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3 ∙ Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing ∙ Contributing to the development of a lakehouse architecture using Apache Iceberg ∙ Collaborating with business teams to translate requirements More ❯
Posted:

Scala Developer

Northampton, England, United Kingdom
Capgemini
us and help the world’s leading organizations unlock the value of technology and build a more sustainable, more inclusive world. YOUR ROLE We are looking for a skilled Spark/Scala Developer to join our data engineering team. The ideal candidate will have hands-on experience in designing, developing, and maintaining large-scale data processing pipelines using Apache Spark and Scala. You will work closely with data scientists, analysts, and engineers to build efficient data solutions and enable data-driven decision-making. YOUR PROFILE Develop, optimize, and maintain data pipelines and ETL processes using Apache Spark and Scala. Design scalable and robust data processing solutions for batch and real-time data. Collaborate with cross … functional teams to gather requirements and translate them into technical specifications. Perform data ingestion, transformation, and cleansing from various structured and unstructured sources. Monitor and troubleshoot Spark jobs, ensuring high performance and reliability. Write clean, maintainable, and well-documented code. Participate in code reviews, design discussions, and agile ceremonies. Implement data quality and governance best practices. Stay updated with More ❯
Posted:

Senior AWS Engineer

London, United Kingdom
Adroit People Ltd
Greetings! Adroit People is currently hiring Title: Senior AWS Data Engineer Location: London, UK Work Mode: Hybrid-3 DAYS/WEEK Duration: 12 Months FTC Keywords: AWS,PYTHON,APACHE,SPARK,ETL Job Spec: We are building the next-generation data platform at FTSE Russell and we want you to shape it with us. Your role will involve: Designing … and developing scalable, testable data pipelines using Python and Apache Spark Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3 Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing Contributing to the development of a lakehouse architecture using Apache Iceberg Collaborating with business teams to translate requirements More ❯
Employment Type: Permanent
Posted:

Data Engineer - ETL

Southam, England, United Kingdom
Electronic Arts (EA)
data security, privacy, and compliance frameworks ● Exposure to machine learning pipelines, MLOps, or AI-driven data products ● Experience with big data platforms and technologies such as EMR, Databricks, Kafka, Spark ● Exposure to AI/ML concepts and collaboration with data science or AI teams. ● Experience integrating data solutions with AI/ML platforms or supporting AI-driven analytics More ❯
Posted:

Data Engineer - ETL

Guildford, England, United Kingdom
Electronic Arts (EA)
data security, privacy, and compliance frameworks ● Exposure to machine learning pipelines, MLOps, or AI-driven data products ● Experience with big data platforms and technologies such as EMR, Databricks, Kafka, Spark ● Exposure to AI/ML concepts and collaboration with data science or AI teams. ● Experience integrating data solutions with AI/ML platforms or supporting AI-driven analytics More ❯
Posted:

Data & Analytics Practice:-Data Architect role- Junior level

England, United Kingdom
Infosys Consulting
modelling tools, data warehousing, ETL processes, and data integration techniques. Experience with at least one cloud data platform (e.g. AWS, Azure, Google Cloud) and big data technologies (e.g., Hadoop, Spark). Strong knowledge of data workflow solutions like Azure Data Factory, Apache NiFi, Apache Airflow, etc. Good knowledge of stream and batch processing solutions like Apache Flink, Apache Kafka. Good knowledge of log management, monitoring, and analytics solutions like Splunk, Elastic Stack, New Relic, etc. Given that this is just a short snapshot of the role we encourage you to apply even if you don't meet all the requirements listed above. We are looking for individuals who strive to make an impact and More ❯
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Architect

England, United Kingdom
Infosys Consulting
modelling tools, data warehousing, ETL processes, and data integration techniques. Experience with at least one cloud data platform (e.g. AWS, Azure, Google Cloud) and big data technologies (e.g., Hadoop, Spark). Strong knowledge of data workflow solutions like Azure Data Factory, Apache NiFi, Apache Airflow. Good knowledge of stream and batch processing solutions like Apache Flink … Apache Kafka. Good knowledge of log management, monitoring, and analytics solutions like Splunk, Elastic Stack, New Relic. Given that this is just a short snapshot of the role we encourage you to apply even if you don't meet all the requirements listed above. We are looking for individuals who More ❯
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Platform Engineer

Luton, England, United Kingdom
Hybrid/Remote Options
easyJet
field. Technical Skills Required • Hands-on software development experience with Python and experience with modern software development and release engineering practices (e.g. TDD, CI/CD). • Experience with Apache Spark or any other distributed data programming frameworks. • Comfortable writing efficient SQL and debugging on cloud warehouses like Databricks SQL or Snowflake. • Experience with cloud infrastructure like AWS … Skills • Hands-on development experience in an airline, e-commerce or retail industry • Experience in event-driven architecture, ingesting data in real time in a commercial production environment with Spark Streaming, Kafka, DLT or Beam. • Experience implementing end-to-end monitoring, quality checks, lineage tracking and automated alerts to ensure reliable and trustworthy data across the platform. • Experience of More ❯
Posted:

Senior Software Engineer, Data, Platform - Enterprise Engineering

Manchester, Lancashire, United Kingdom
Roku, Inc
s expertise spans a wide range of technologies, including Java and Python based MicroServices, Data Platform services, AWS/GCP cloud backend systems, Big Data technologies like Hive and Spark, and modern Web applications. With a globally distributed presence across the US, India and Europe, the team thrives on collaboration, bringing together diverse perspectives to solve complex challenges. At … skills We're excited if you have 7+ years of experience delivering multi tier, highly scalable, distributed web applications Experience working with Distributed computing frameworks knowledge: Hive/Hadoop, Apache Spark, Kafka, Airflow Working with programming languages Python , Java, SQL. Working on building ETL (Extraction Transformation and Loading) solution using PySpark Experience in SQL/NoSQL database design More ❯
Employment Type: Permanent
Salary: GBP Annual
Posted:

Principal Engineer, BCG Expand, London

London, United Kingdom
Boston Consulting Group
two of the following: Python, SQL, Java Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams Deep knowledge of database technologies: Distributed systems (e.g., Spark, Hadoop, EMR) RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL) NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j) Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD More ❯
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Engineer

City of London, London, United Kingdom
Hybrid/Remote Options
Tata Consultancy Services
with AWS Cloud-native data platforms, including: AWS Glue, Lambda, Step Functions, Athena, Redshift, S3, CloudWatch AWS SDKs, Boto3, and serverless architecture patterns Strong programming skills in Python and Apache Spark Proven experience in Snowflake data engineering, including: Snowflake SQL, Snowpipe, Streams & Tasks, and performance optimization Integration with AWS services and orchestration tools Expertise in data integration patterns More ❯
Posted:

Data Engineer

London Area, United Kingdom
Hybrid/Remote Options
Tata Consultancy Services
with AWS Cloud-native data platforms, including: AWS Glue, Lambda, Step Functions, Athena, Redshift, S3, CloudWatch AWS SDKs, Boto3, and serverless architecture patterns Strong programming skills in Python and Apache Spark Proven experience in Snowflake data engineering, including: Snowflake SQL, Snowpipe, Streams & Tasks, and performance optimization Integration with AWS services and orchestration tools Expertise in data integration patterns More ❯
Posted:
Apache Spark Salary Percentiles - England

10th Percentile: £51,625
25th Percentile: £61,250
Median: £78,000
75th Percentile: £111,250
90th Percentile: £140,000