Apache Spark Jobs in London

76 to 100 of 787 Apache Spark Jobs in London

Principal Data & AI Consultant

London, England, United Kingdom
Ciklum
proficiency, with experience in MS SQL Server or PostgreSQL. Familiarity with platforms like Databricks and Snowflake for data engineering and analytics. Experience working with Big Data technologies (e.g., Hadoop, Apache Spark). Familiarity with NoSQL databases (e.g., columnar or graph databases like Cassandra, Neo4j). Research experience with peer-reviewed publications. Certifications in cloud-based machine learning services (AWS, Azure …

Senior Data Science Engineer

City of London, England, United Kingdom
Hybrid / WFH Options
Parser
for you. The impact you'll make: Design and develop reusable Python packages (pip/conda) to productionize data science solutions. Process big data at scale using Hadoop/Spark and optimize workflows with Airflow/orchestration tools. Build scalable applications and REST/RPC APIs (Flask/FastAPI/gRPC) for global products. Advocate for engineering best practices … including CI/CD, DevOps, and containerization (Docker/Kubernetes). Mentor junior engineers and lead initiatives to enhance research tooling and dashboards. Languages: Python, SQL. Tools: Spark, Hadoop, Airflow, Docker, FastAPI/Flask. Cloud: AWS, CI/CD pipelines (Jenkins, Git). What you'll bring to us: 7+ years in data engineering/science, with expertise in Python … Spark, and SQL. Proven experience with big data processing and scalable system design. Strong knowledge of CI/CD, Agile frameworks, and cloud platforms (AWS). Bonus: Familiarity with ML techniques (regression, clustering) and retail data. Location: Hybrid model at the office located in Hammersmith. Frequency: The employee must be available for up to 20-25% office presence …

Senior Data Engineer

London, England, United Kingdom
Hybrid / WFH Options
Foundever
and monitoring systems. Skills/Abilities/Knowledge: Proficiency in data modeling and database management. Strong programming skills in Python and SQL. Knowledge of big data technologies like Hadoop, Spark, and NoSQL databases. Deep experience with ETL processes and data pipeline development. Strong understanding of data warehousing concepts and best practices. Experience with cloud platforms such as AWS and … Science or Engineering. Languages: Excellent command of English. French and Spanish language skills are a bonus. Tools and Applications: Programming languages and tools: Python, SQL. Big data technologies: Hadoop, Spark, NoSQL databases. ETL and data pipeline tools: AWS Glue, Airflow. Cloud platforms: AWS, Azure. Data visualization tools and data modeling software. Version control systems and collaborative development platforms. Our …

Product Engineering Lead (Supply and R&D)

London, United Kingdom
Mars, Incorporated and its Affiliates
priorities aimed at maximizing value through data utilization. Knowledge/Experience: Expertise in Commercial/Procurement Analytics. Experience in SAP (S/4 Hana). Experience with Spark, Databricks, or similar data processing tools. Strong technical proficiency in data modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL … processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based data services (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery). Experience with DataOps practices and tools, including CI/CD for data pipelines. …
Employment Type: Permanent
Salary: GBP Annual

Product Engineering Lead (Supply and R&D)

City Of Westminster, London, United Kingdom
Mars Petcare UK
priorities aimed at maximizing value through data utilization. Knowledge/Experience: Expertise in Commercial/Procurement Analytics. Experience in SAP (S/4 Hana). Experience with Spark, Databricks, or similar data processing tools. Strong technical proficiency in data modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL … processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based data services (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery). Experience with DataOps practices and tools, including CI/CD for data pipelines. …
Employment Type: Permanent
Salary: GBP Annual

Lead Data Engineer

London, England, United Kingdom
Landmark Information
championing best practices in coding, architecture, and performance. Foster a team culture focused on continuous improvement, where learning is encouraged. Leverage Big Data Technologies: Utilise tools such as Hadoop, Spark, and Kafka to design and manage large-scale on-prem data processing systems. Collaboration: Collaborate with cross-functional teams and stakeholders to deliver high-impact solutions that align with … ability to explain technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential skills: Programming languages such as Spark, Java, Python, PySpark, Scala, etc. (minimum 2). Extensive Big Data hands-on experience (coding/configuration/automation/monitoring/security/etc.) is a must. Significant AWS …

SC cleared - Azure Data Engineer

London, England, United Kingdom
Hybrid / WFH Options
Methods
with data modelling, data warehousing, and lakehouse architectures. - Knowledge of DevOps practices, including CI/CD pipelines and version control (e.g., Git). - Understanding of big data technologies (e.g., Spark, Hadoop) is a plus. Seniority level: Mid-Senior level. Employment type: Contract. Job function: Information Technology. …

Data Lakehouse Developer

London, United Kingdom
Hybrid / WFH Options
ZILO
EMR, Glue). Familiarity with programming languages such as Python or Java. Understanding of data warehousing concepts and data modeling techniques. Experience working with big data technologies (e.g., Hadoop, Spark) is an advantage. Excellent problem-solving and analytical skills. Strong communication and collaboration skills. Benefits: Enhanced leave - 38 days inclusive of 8 UK Public Holidays. Private Health Care including …
Employment Type: Permanent
Salary: GBP Annual

SAS Data Engineer

London, England, United Kingdom
Talan Group
ll also have: Experience with Relational Databases and Data Warehousing concepts. Experience with Enterprise ETL tools such as Informatica, Talend, DataStage, or Alteryx. Project experience with technologies like Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross-platform experience. Team building and leadership skills. You must be: Willing to work on client sites, potentially for extended periods. Willing to travel for …

SAS Data Engineer

London, United Kingdom
Talan Group
of Relational Databases and Data Warehousing concepts. Experience of Enterprise ETL tools such as Informatica, Talend, DataStage or Alteryx. Project experience using any of the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross- and multi-platform experience. Team building and leading. You must be: Willing to work on client sites, potentially for extended periods. Willing to travel …
Employment Type: Permanent
Salary: GBP Annual

Senior Application Architect

London, United Kingdom
Iamwarpspeed
Strong programming skills (Python, Java, C++) and experience with DevOps practices (CI/CD). Familiarity with containerization (Docker, Kubernetes), RESTful APIs, microservices architecture, and big data technologies (Hadoop, Spark, Flink). Knowledge of NoSQL databases (MongoDB, Cassandra, DynamoDB), message queueing systems (Kafka, RabbitMQ), and version control systems (Git). Preferred Skills: Experience with natural language processing libraries such …
Employment Type: Permanent
Salary: GBP Annual

Data Engineer

London, England, United Kingdom
Insight Global
quality code (e.g., data structures, error handling, code optimization). Proficiency in SQL – comfortable designing databases, writing complex queries, and handling performance tuning. Experience with Databricks (or a comparable Spark environment) – ability to build data pipelines, schedule jobs, and create dashboards/notebooks. Experience with Azure services (Data Factory, Synapse, or similar) and knowledge of cloud-based data solutions.

Senior Data Engineer - London (Hybrid)

London, United Kingdom
Hybrid / WFH Options
freemarketFX Limited
cron jobs, job orchestration, and error monitoring tools. Good to have: Experience with Azure Bicep or other Infrastructure-as-Code tools. Exposure to real-time/streaming data (Kafka, Spark Streaming, etc.). Understanding of data mesh, data contracts, or domain-driven data architecture. Hands-on experience with MLflow and Llama. …
Employment Type: Permanent
Salary: GBP Annual

Lead Data Scientist

London, United Kingdom
Live Nation
e.g. R, Python. Strong knowledge of deploying end-to-end machine learning models in Databricks utilizing PySpark, MLflow and workflows. Strong knowledge of data platforms and tools, including Hadoop, Spark, SQL, and NoSQL databases. Communicate algorithmic solutions in a clear, understandable way. Leverage data visualization techniques and tools to effectively demonstrate patterns, outliers and exceptional conditions in the data. …
Employment Type: Permanent
Salary: GBP Annual

AI Data Scientist

London, England, United Kingdom
Hybrid / WFH Options
Rein-Ton
See From You. Qualifications: Proven experience as a Data Engineer with a strong background in data pipelines. Proficiency in Python, Java, or Scala, and big data technologies (e.g., Hadoop, Spark, Kafka). Experience with Databricks, Azure AI Services, and cloud platforms (AWS, Google Cloud, Azure). Solid understanding of SQL and NoSQL databases. Strong problem-solving skills and ability …

Data Engineer (AWS)

London, England, United Kingdom
Hybrid / WFH Options
EXL Service
its development/management. Qualifications and experience we consider to be essential for the role: 5+ years of experience in Data Engineering: SQL, DWH (Redshift or Snowflake), Python (PySpark), Spark and associated data engineering jobs. Experience with AWS ETL pipeline services: Lambda, S3, EMR/Glue, Redshift (or Snowflake), step-functions. (Preferred) Experience with building and supporting cloud-based …

AI Solutions Architect - London

London, England, United Kingdom
Neo4j
Strong foundation in data engineering, data analytics, or data science, with the ability to work effectively with various data types and sources. Experience using big data technologies (e.g. Hadoop, Spark, Hive) and database management systems (e.g. SQL and NoSQL). Graph Database Expertise: Deep understanding of graph database concepts, data modeling, and query languages (e.g., Cypher). Demonstrate hands …

Senior Cloud and Data Architect

City of London, London, United Kingdom
Gazelle Global
technologies – Azure, AWS, GCP, Snowflake, Databricks. Must Have: Hands-on experience on at least 2 Hyperscalers (GCP/AWS/Azure platforms), and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs … minimum of 5 years’ experience in a similar role. Ability to lead and mentor the architects. Required Skills (Mandatory, at least 2 Hyperscalers): GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Preferred Skills: Designing Databricks-based …

Senior Cloud and Data Architect

London Area, United Kingdom
Gazelle Global
technologies – Azure, AWS, GCP, Snowflake, Databricks. Must Have: Hands-on experience on at least 2 Hyperscalers (GCP/AWS/Azure platforms), and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs … minimum of 5 years’ experience in a similar role. Ability to lead and mentor the architects. Required Skills (Mandatory, at least 2 Hyperscalers): GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Preferred Skills: Designing Databricks-based …

Senior Cloud and Data Solution Architect

City of London, London, United Kingdom
Coforge
technologies – Azure, AWS, GCP, Snowflake, Databricks. Must Have: Hands-on experience on at least 2 Hyperscalers (GCP/AWS/Azure platforms), and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs … skills. A minimum of 5 years’ experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Desirable Skills: Designing Databricks-based …

Senior Cloud and Data Solution Architect

London Area, United Kingdom
Coforge
technologies – Azure, AWS, GCP, Snowflake, Databricks. Must Have: Hands-on experience on at least 2 Hyperscalers (GCP/AWS/Azure platforms), and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs … skills. A minimum of 5 years’ experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Desirable Skills: Designing Databricks-based …

We are hiring: Data Scientist, London

London, United Kingdom
Hybrid / WFH Options
The Society for Location Analysis
using data manipulation and machine learning libraries in one or more programming languages. Keen interest in some of the following areas: Big Data Analytics (e.g. Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks, Graph Computing, Microservices Architectures, Modelling …
Employment Type: Permanent
Salary: GBP Annual

Senior Solutions Architect (m/f/d) - EMEA

London, England, United Kingdom
Ververica | Original creators of Apache Flink®
Solutions Architect to join our Customer Success team in EMEA. In this highly technical role, you will design, implement, and optimize real-time data streaming solutions, focusing specifically on Apache Flink and Ververica's Streaming Data Platform. You'll collaborate directly with customers and cross-functional teams, leveraging deep expertise in distributed systems, event-driven architectures, and cloud-native … implementation, architecture consulting, and performance optimization. Key Responsibilities: Analyze customer requirements and design scalable, reliable, and efficient stream-processing solutions. Provide technical implementation support and hands-on expertise deploying Apache Flink and Ververica's platform in pre-sales and post-sales engagements. Develop prototypes and proof-of-concept (PoC) implementations to validate and showcase solution feasibility and performance. Offer … technical reviews, and promote best practices in stream processing. Deliver professional services engagements, including technical training sessions, workshops, and performance optimization consulting. Act as a subject matter expert on Apache Flink, real-time stream processing, and distributed architectures. Create and maintain high-quality technical documentation, reference architectures, best-practice guides, and whitepapers. Stay informed on emerging streaming technologies, cloud …

Data Engineer ( DV Cleared )

London, England, United Kingdom
Hybrid / WFH Options
LHH
large-scale data pipelines in secure or regulated environments. Ingest, process, index, and visualise data using the Elastic Stack (Elasticsearch, Logstash, Kibana). Build and maintain robust data flows with Apache NiFi. Implement best practices for handling sensitive data, including encryption, anonymisation, and access control. Monitor and troubleshoot real-time data pipelines to ensure high performance and reliability. Write efficient … Skills and Experience: 3+ years’ experience as a Data Engineer in secure, regulated, or mission-critical environments. Proven expertise with the Elastic Stack (Elasticsearch, Logstash, Kibana). Solid experience with Apache NiFi. Strong understanding of data security, governance, and compliance requirements. Working knowledge of cloud platforms (AWS, Azure, or GCP), particularly in secure deployments. Experience using Infrastructure as Code tools … stakeholder management skills. Detail-oriented with a strong focus on data accuracy, quality, and reliability. Desirable (Nice to Have): Background in defence, government, or highly regulated sectors. Familiarity with Apache Kafka, Spark, or Hadoop. Experience with Docker and Kubernetes. Use of monitoring/alerting tools such as Prometheus, Grafana, or ELK. Understanding of machine learning algorithms and data …

Data Engineer - 1 year contract

London, England, United Kingdom
CHUBB
focus on automation and data process improvement. Demonstrated experience in designing and implementing automation frameworks and solutions for data pipelines and transformations. Strong understanding of data processing frameworks (e.g., Apache Spark, Apache Kafka) and database technologies (e.g., SQL, NoSQL). Expertise in programming languages relevant to data engineering (e.g., Python, SQL). Hands-on data preparation activities … data and its role in improving organizational efficiency. Strong communication, collaboration, and leadership skills, with the ability to work effectively across departments and with stakeholders at all levels. Databricks Spark and Microsoft Azure certifications are a plus. About Us: Chubb is a world leader in insurance. With operations in 54 countries, Chubb provides commercial and personal property and casualty …
Apache Spark salary percentiles in London:
10th Percentile: £56,250
25th Percentile: £75,000
Median: £97,500
75th Percentile: £115,000
90th Percentile: £138,750