Apache Jobs in the UK

301 to 325 of 988 Apache Jobs in the UK

Data Engineer

London, England, United Kingdom
Devi Technologies
Experience with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, MySQL, Cassandra). Familiarity with cloud data platforms such as AWS, Google Cloud, or Azure. Experience with data processing frameworks (e.g., Apache Spark, Apache Kafka) and ETL tools.

Data Engineer - Telecom

City of London, London, United Kingdom
Response Informatics
Key Responsibilities: Design and implement real-time data pipelines using tools like Apache Kafka, Apache Flink, or Spark Streaming. Develop and maintain event schemas using Avro, Protobuf, or JSON Schema. Collaborate with backend teams to integrate event-driven microservices. Ensure data quality, lineage, and observability across streaming systems. Optimize performance and scalability of streaming applications. Implement CI/…
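For context on what the first responsibility looks like in practice, here is a minimal sketch of publishing an Avro-encoded event to Kafka, assuming the kafka-python and fastavro libraries; the broker address, topic name, and schema are illustrative, not taken from the posting.

```python
# Minimal sketch: publish an Avro-encoded telecom event to Kafka.
# Broker, topic, and schema are illustrative assumptions.
import io

from fastavro import parse_schema, schemaless_writer
from kafka import KafkaProducer  # kafka-python

CALL_EVENT = parse_schema({
    "type": "record",
    "name": "CallEvent",
    "fields": [
        {"name": "caller_id", "type": "string"},
        {"name": "duration_s", "type": "int"},
    ],
})

def encode(event: dict) -> bytes:
    # Schemaless Avro encoding; a real pipeline would usually pair this
    # with a schema registry so consumers can resolve the writer schema.
    buf = io.BytesIO()
    schemaless_writer(buf, CALL_EVENT, event)
    return buf.getvalue()

producer = KafkaProducer(bootstrap_servers="localhost:9092", value_serializer=encode)
producer.send("telecom.call-events", {"caller_id": "+447009001234", "duration_s": 42})
producer.flush()
```

A Flink or Spark Streaming job would then consume this topic and apply the quality, lineage, and observability checks the listing mentions.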

Data Engineer - Telecom

London Area, United Kingdom
Response Informatics
Key Responsibilities: Design and implement real-time data pipelines using tools like Apache Kafka, Apache Flink, or Spark Streaming. Develop and maintain event schemas using Avro, Protobuf, or JSON Schema. Collaborate with backend teams to integrate event-driven microservices. Ensure data quality, lineage, and observability across streaming systems. Optimize performance and scalability of streaming applications. Implement CI/…

Data Engineer - Telecom

South East London, England, United Kingdom
Response Informatics
Key Responsibilities: Design and implement real-time data pipelines using tools like Apache Kafka, Apache Flink, or Spark Streaming. Develop and maintain event schemas using Avro, Protobuf, or JSON Schema. Collaborate with backend teams to integrate event-driven microservices. Ensure data quality, lineage, and observability across streaming systems. Optimize performance and scalability of streaming applications. Implement CI/…

Senior Data Engineer

London, England, United Kingdom
Hybrid / WFH Options
Widen the Net Limited
…will develop scalable data pipelines, ensure data quality, and support business decision-making with high-quality datasets. Work across the technology stack: SQL, Python, ETL, BigQuery, Spark, Hadoop, Git, Apache Airflow, data architecture, data warehousing. Design and develop scalable ETL pipelines to automate data processes and optimize delivery. Implement and manage data warehousing solutions, ensuring data integrity through rigorous … testing and validation. Lead, plan and execute workflow migration and data orchestration using Apache Airflow. Focus on data engineering and data analytics. Requirements: 5+ years of experience in SQL; 5+ years of development in Python; MUST have strong experience in Apache Airflow; experience with ETL tools, data architecture, and data warehousing solutions. This contract is £450 per day.
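Since the posting leans heavily on Apache Airflow, a toy DAG gives a feel for the orchestration work involved; the DAG id, schedule, and task bodies below are invented for illustration, and the Airflow 2.x API is assumed.

```python
# Toy Airflow 2.x DAG: a linear extract -> transform -> load workflow.
# All names and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from the source system")

def transform():
    print("clean and reshape the extracted rows")

def load():
    print("write the result to the warehouse")

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older releases use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # dependency chain
```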

Senior Data Engineer (Remote)

South East, United Kingdom
Hybrid / WFH Options
Circana
In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a … significant impact, we encourage you to apply!
Job Responsibilities:
ETL/ELT Pipeline Development: Design, develop, and optimize efficient and scalable ETL/ELT pipelines using Python, PySpark, and Apache Airflow. Implement batch and real-time data processing solutions using Apache Spark. Ensure data quality, governance, and security throughout the data lifecycle.
Cloud Data Engineering: Manage and optimize … effectiveness. Implement and maintain CI/CD pipelines for data workflows to ensure smooth and reliable deployments.
Big Data & Analytics: Develop and optimize large-scale data processing pipelines using Apache Spark and PySpark. Implement data partitioning, caching, and performance tuning techniques to enhance Spark-based workloads. Work with diverse data formats (structured and unstructured) to support advanced analytics and …
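As a rough illustration of the batch side of this role, a PySpark job that reads raw data, cleans it, and writes a partitioned output might look like the sketch below; the paths and column names are assumptions, not anything from the posting.

```python
# Hypothetical PySpark batch ETL: read, deduplicate, derive a partition
# column, and write partitioned Parquet. Paths and columns are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

orders = spark.read.parquet("/data/raw/orders")
cleaned = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)
cleaned.cache()  # reused for both the write and any validation queries

# Partitioning by date lets downstream readers prune files: one of the
# performance-tuning techniques the listing mentions.
(cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("/data/curated/orders"))
```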
Employment Type: Permanent

Senior Data Engineer (Remote)

London, England, United Kingdom
Hybrid / WFH Options
Circana
In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a … significant impact, we encourage you to apply!
Job Responsibilities:
ETL/ELT Pipeline Development: Design, develop, and optimize efficient and scalable ETL/ELT pipelines using Python, PySpark, and Apache Airflow. Implement batch and real-time data processing solutions using Apache Spark. Ensure data quality, governance, and security throughout the data lifecycle.
Cloud Data Engineering: Manage and optimize … effectiveness. Implement and maintain CI/CD pipelines for data workflows to ensure smooth and reliable deployments.
Big Data & Analytics: Develop and optimize large-scale data processing pipelines using Apache Spark and PySpark. Implement data partitioning, caching, and performance tuning techniques to enhance Spark-based workloads. Work with diverse data formats (structured and unstructured) to support advanced analytics and …
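For the real-time half of the same responsibilities, a minimal Spark Structured Streaming job consuming from Kafka could look like this sketch; the broker, topic, and paths are placeholders, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
# Sketch: Spark Structured Streaming from Kafka to Parquet.
# Requires the spark-sql-kafka connector; all names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream_sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; cast before downstream processing.
decoded = events.select(F.col("value").cast("string").alias("payload"))

query = (
    decoded.writeStream.format("parquet")
    .option("path", "/data/stream/events")
    .option("checkpointLocation", "/data/stream/_checkpoints")  # recovery state
    .start()
)
query.awaitTermination()
```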

Regulatory Reporting - Finance IT - Sheffield, UK

Sheffield, England, United Kingdom
Natobotics
Experience working with cloud computing (Google Cloud Platform preferred). Strong SQL skills and proficiency in at least one programming language, e.g. Python. Experience with the data processing framework Apache Flink. Experience with the workflow management tool Apache Airflow. Excellent communication and collaboration skills, with the ability to effectively interact with both technical and non-technical stakeholders. Strong problem…
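The Flink requirement is easiest to picture with a toy PyFlink job; the data, transformation, and job name below are invented, and the PyFlink DataStream API is assumed.

```python
# Toy PyFlink DataStream job: transform a small in-memory collection.
# Purely illustrative; a real job would read from Kafka or files.
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

payments = env.from_collection([("card", 12.50), ("cash", 3.00), ("card", 7.25)])

# Convert pounds to pence; with no type hints PyFlink falls back to
# pickled Python objects, which is fine for a sketch.
payments.map(lambda t: (t[0], int(t[1] * 100))).print()

env.execute("flink_sketch")
```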

Senior DevOps Engineer

London, England, United Kingdom
Hybrid / WFH Options
BI:PROCSI
…around release management, testing, and automation to ensure successful project delivery, adhering to client timelines and quality standards. Implement and manage real-time and batch data processing frameworks (e.g., Apache Kafka, Apache Spark, Google Cloud Dataproc) in line with project needs. Build and maintain robust monitoring, logging, and alerting systems for client projects, ensuring system health and performance. … Strong programming and scripting skills in languages like Python, Bash, or Go to automate tasks and build necessary tools. Expertise in designing and optimising data pipelines using frameworks like Apache Airflow or equivalent. Demonstrated experience with real-time and batch data processing frameworks, including Apache Kafka, Apache Spark, or Google Cloud Dataflow. Proficiency in CI/CD…

Senior Data Scientist (Equity only)

United Kingdom
Hybrid / WFH Options
Luupli
…tools, and statistical packages. Strong analytical, problem-solving, and critical thinking skills. 8. Experience with social media analytics and understanding of user behaviour. 9. Familiarity with big data technologies, such as Apache Hadoop, Apache Spark, or Apache Kafka. 10. Knowledge of AWS machine learning services, such as Amazon SageMaker and Amazon Comprehend. 11. Experience with data governance and security best practices…

Senior Engineer - Data

Glasgow, Scotland, United Kingdom
Hybrid / WFH Options
Eden Scott
…cutting-edge technologies.
About the Role: You’ll be part of an agile, cross-functional team building a powerful data platform and intelligent search engine. Working with technologies like Apache Lucene, Solr, and Elasticsearch, you'll contribute to the design and development of scalable systems, with opportunities to explore machine learning, AI-driven categorisation models, and vector search.
What You’ll Be Doing: Design and build high-performance data pipelines and search capabilities. Develop solutions using Apache Lucene, Solr, or Elasticsearch. Implement scalable, test-driven code in Java and Python. Work collaboratively with Business Analysts, Data Engineers, and UI Developers. Contribute across the stack – from React/TypeScript front end to Java-based backend services. Leverage cloud infrastructure … leading data sets. Continuous improvements to how data is processed, stored, and presented.
Your Profile: Strong experience in Java development, with some exposure to Python. Hands-on knowledge of Apache Lucene, Solr, or Elasticsearch (or willingness to learn). Experience in large-scale data processing and building search functionality. Skilled with SQL and NoSQL databases. Comfortable working in Agile…
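A tiny example of the search side of this stack, using the official Elasticsearch Python client (v8-style API); the cluster URL, index name, document, and query are made up for illustration.

```python
# Index one document and run a relevance-ranked match query.
# Cluster URL, index, and fields are illustrative assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.index(index="products", id="1",
         document={"name": "steel bolt M6", "category": "fasteners"})
es.indices.refresh(index="products")  # make the document searchable now

resp = es.search(index="products", query={"match": {"name": "bolt"}})
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["name"])
```

Solr and raw Lucene expose the same ideas (inverted index, analyzers, scored queries) through different APIs.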

Solutions Architect (Data Analytics)

City of London, London, United Kingdom
Vallum Associates
…technologies – Azure, AWS, GCP, Snowflake, Databricks. Must Have: Hands-on experience on at least 2 Hyperscalers (GCP/AWS/Azure platforms) and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs, Kafka … skills. A minimum of 5 years’ experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Desirable Skills: Designing Databricks-based solutions…
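Of the named technologies, Apache Beam is the most portable to show in miniature: the same pipeline below runs on the local runner by default and on Dataflow with a runner option; the input data and transforms are invented.

```python
# Minimal Apache Beam pipeline: a word count over an in-memory input.
# Runs locally by default; pass Dataflow pipeline options to run on GCP.
import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "Create" >> beam.Create(["gcp,aws", "azure", "gcp"])
        | "Split" >> beam.FlatMap(lambda line: line.split(","))
        | "PairWithOne" >> beam.Map(lambda word: (word, 1))
        | "CountPerKey" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```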

Senior Data Engineer

West Bromwich, England, United Kingdom
Hybrid / WFH Options
Leonardo UK Ltd
…exciting and critical challenges to the UK’s digital landscape. This role requires strong expertise in building and managing data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. The successful candidate will design, implement, and maintain scalable, secure data solutions, ensuring compliance with strict security standards and regulations. This is a UK-based onsite role with … the option of compressed hours. The role will include: Design, develop, and maintain secure and scalable data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. Implement data ingestion, transformation, and integration processes, ensuring data quality and security. Collaborate with data architects and security teams to ensure compliance with security policies and data governance standards. Manage and … experience working as a Data Engineer in secure or regulated environments. Expertise in the Elastic Stack (Elasticsearch, Logstash, Kibana) for data ingestion, transformation, indexing, and visualization. Strong experience with Apache NiFi for building and managing complex data flows and integration processes. Knowledge of security practices for handling sensitive data, including encryption, anonymization, and access control. Familiarity with data governance…
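NiFi flows are configured visually rather than in code, but the Elasticsearch end of such an ingestion pipeline can be sketched with the Python client's bulk helper; the index name and documents here are placeholders.

```python
# Bulk-load documents into Elasticsearch, as an ingestion flow might.
# Index name and document contents are invented for illustration.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

actions = (
    {"_index": "logs", "_source": {"host": f"node-{i}", "level": "INFO"}}
    for i in range(1000)
)

ok, errors = helpers.bulk(es, actions)  # batches docs into _bulk requests
print(f"indexed {ok} documents, {len(errors)} errors")
```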

Senior Python Developer

City Of London, England, United Kingdom
developrec
…engineers + external partners) across complex data and cloud engineering projects. Designing and delivering distributed solutions on an AWS-centric stack, with open-source flexibility. Working with Databricks, Apache Iceberg, and Kubernetes in a cloud-agnostic environment. Guiding architecture and implementation of large-scale data pipelines for structured and unstructured data. Steering direction on software stack, best practices, and … especially AWS), and orchestration technologies. Proven delivery of big data solutions—not necessarily at FAANG scale, but managing high-volume, complex data (structured/unstructured). Experience working with Databricks, Apache Iceberg, or similar modern data platforms. Experience of building software environments from the ground up, setting best practice and standards. Experience leading and mentoring teams. Worked in a startup…

Solutions Architect (Data Analytics)

London, United Kingdom
Vallum
…technologies - Azure, AWS, GCP, Snowflake, Databricks. Must Have: Hands-on experience on at least 2 Hyperscalers (GCP/AWS/Azure platforms) and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs, Kafka … skills. A minimum of 5 years' experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Desirable Skills: Designing Databricks-based solutions…
Employment Type: Permanent

Solutions Architect (Data Analytics)- Presales, RFP creation

City of London, London, United Kingdom
Vallum Associates
…technologies – Azure, AWS, GCP, Snowflake, Databricks. Must Have: Hands-on experience on at least 2 Hyperscalers (GCP/AWS/Azure platforms) and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs, Kafka … skills. A minimum of 5 years’ experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Desirable Skills: Designing Databricks-based solutions…

Solutions Architect (Data Analytics)- Presales, RFP creation

London Area, United Kingdom
Vallum Associates
…technologies – Azure, AWS, GCP, Snowflake, Databricks. Must Have: Hands-on experience on at least 2 Hyperscalers (GCP/AWS/Azure platforms) and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs, Kafka … skills. A minimum of 5 years’ experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Desirable Skills: Designing Databricks-based solutions…

Data Engineer

London, England, United Kingdom
GSR
…a focus on data quality and reliability.
Infrastructure & Architecture: Design and manage data storage solutions, including databases, warehouses, and lakes. Leverage cloud-native services and distributed processing tools (e.g., Apache Flink, AWS Batch) to support large-scale data workloads.
Operations & Tooling: Monitor, troubleshoot, and optimize data pipelines to ensure performance and cost efficiency. Implement data governance, access controls, and … ELT pipelines and data architectures. Hands-on expertise with cloud platforms (e.g., AWS) and cloud-native data services. Comfortable with big data tools and distributed processing frameworks such as Apache Flink or AWS Batch. Strong understanding of data governance, security, and best practices for data quality. Effective communicator with the ability to work across technical and non-technical teams.
Additional Strengths: Experience with orchestration tools like Apache Airflow. Knowledge of real-time data processing and event-driven architectures. Familiarity with observability tools and anomaly detection for production systems. Exposure to data visualization platforms such as Tableau or Looker. Relevant cloud or data engineering certifications.
What we offer: A collaborative and transparent company culture founded on Integrity, Innovation and …
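For the AWS Batch piece, submitting a containerised job is a one-call affair with boto3; the job name, queue, and definition below are placeholders, not real resources.

```python
# Submit a job to AWS Batch with boto3; all identifiers are placeholders.
import boto3

batch = boto3.client("batch", region_name="eu-west-2")

resp = batch.submit_job(
    jobName="nightly-backfill",
    jobQueue="data-eng-queue",
    jobDefinition="pipeline-job:1",  # name:revision of a registered definition
)
print("submitted:", resp["jobId"])  # poll describe_jobs to track status
```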

Solutions Architect (Data Analytics)

Slough, England, United Kingdom
JR United Kingdom
…technologies – Azure, AWS, GCP, Snowflake, Databricks. Must Have: Hands-on experience on at least 2 Hyperscalers (GCP/AWS/Azure platforms) and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs, Kafka … skills. A minimum of 5 years’ experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Designing Databricks-based solutions for Azure…

Machine Learning Engineer, II

London, United Kingdom
Spotify
…Python, or similar languages. Experience with TensorFlow, PyTorch, Scikit-learn, etc. is a strong plus. You have some experience with large-scale, distributed data processing frameworks/tools like Apache Beam, Apache Spark, or even our open source API for it - Scio, and cloud platforms like GCP or AWS. You care about agile software processes, data-driven development…
Employment Type: Permanent

Senior Data Engineer

Bristol, England, United Kingdom
Hybrid / WFH Options
Leonardo
…exciting, and critical challenges to the UK’s digital landscape. This role requires strong expertise in building and managing data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. The successful candidate will design, implement, and maintain scalable, secure data solutions, ensuring compliance with strict security standards and regulations. This is a UK-based onsite role with … the option of compressed hours. The role will include: Design, develop, and maintain secure and scalable data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. Implement data ingestion, transformation, and integration processes, ensuring data quality and security. Collaborate with data architects and security teams to ensure compliance with security policies and data governance standards. Manage and … experience working as a Data Engineer in secure or regulated environments. Expertise in the Elastic Stack (Elasticsearch, Logstash, Kibana) for data ingestion, transformation, indexing, and visualization. Strong experience with Apache NiFi for building and managing complex data flows and integration processes. Knowledge of security practices for handling sensitive data, including encryption, anonymization, and access control. Familiarity with data governance…

Solutions Architect (Data Analytics)- Presales, RFP creation

City of London, England, United Kingdom
JR United Kingdom
…technologies – Azure, AWS, GCP, Snowflake, Databricks. Must Have: Hands-on experience on at least 2 Hyperscalers (GCP/AWS/Azure platforms) and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs, Kafka … skills. A minimum of 5 years’ experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Designing Databricks-based solutions for Azure…

Snowflake Architect

Basildon, England, United Kingdom
TestYantra Software Solutions
…data warehousing concepts and data modeling. Excellent problem-solving and communication skills focused on delivering high-quality solutions. Understanding or hands-on experience with orchestration tools such as Apache Airflow. Deep knowledge of non-functional requirements such as availability, scalability, operability, and maintainability.

Data Engineer

London, England, United Kingdom
Sporty
…relational and NoSQL databases. Experience with data modelling. General understanding of data architectures and event-driven architectures. Proficient in SQL. Familiarity with one scripting language, preferably Python. Experience with Apache Airflow & Apache Spark. Solid understanding of cloud data services: AWS services such as S3, Athena, EC2, Redshift, EMR (Elastic MapReduce), EKS, RDS (Relational Database Service) and Lambda. Nice…
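As an illustration of the AWS data services named here, querying S3-backed data through Athena from Python is a short exercise; the database, table, and results bucket below are invented.

```python
# Kick off an Athena query with boto3; identifiers are placeholders.
import boto3

athena = boto3.client("athena", region_name="eu-west-2")

resp = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/queries/"},
)
print(resp["QueryExecutionId"])  # poll get_query_execution until complete
```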

Data Engineer - Manager

London, United Kingdom
Cloud Decisions
…Azure, AWS, GCP). Hands-on experience with SQL, data pipelines, data orchestration and integration tools. Experience in data platforms on premises/cloud using technologies such as Hadoop, Kafka, Apache Spark, Apache Flink, object, relational and NoSQL data stores. Hands-on experience with big data application development and cloud data warehousing (e.g. Hadoop, Spark, Redshift, Snowflake, GCP BigQuery…)
Employment Type: Permanent
Apache Salary Percentiles (UK)
10th Percentile: £37,574
25th Percentile: £60,875
Median: £110,000
75th Percentile: £122,500
90th Percentile: £138,750