robust way possible! Diverse training opportunities and social benefits (e.g. UK pension scheme) What do you offer? Strong hands-on experience working with modern Big Data technologies such as Apache Spark, Trino, Apache Kafka, Apache Hadoop, Apache HBase, Apache NiFi, Apache Airflow, OpenSearch Proficiency in cloud-native technologies such as containerization and Kubernetes …
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Searchability
position, you'll develop and maintain a mix of real-time and batch ETL processes, ensuring accuracy, integrity, and scalability across vast datasets. You'll work with Python, SQL, Apache Spark, and AWS services such as EMR, Athena, and Lambda to deliver robust, high-performance solutions. You'll also play a key role in optimising data pipeline architecture, supporting … Proven experience as a Data Engineer, with Python & SQL expertise Familiarity with AWS services (or equivalent cloud platforms) Experience with large-scale datasets and ETL pipeline development Knowledge of Apache Spark (Scala or Python) beneficial Understanding of agile development practices, CI/CD, and automated testing Strong problem-solving and analytical skills Positive team player with excellent communication … required skills) your application to our client in conjunction with this vacancy only. KEY SKILLS: Data Engineer/Python/SQL/AWS/ETL/Data Pipelines/Apache Spark/EMR/Athena/Lambda/Big Data/Manchester/Hybrid Working …
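To illustrate the Python/Spark/AWS combination this role describes, here is a minimal batch ETL sketch in PySpark: it reads raw JSON from S3, applies basic cleansing, and writes date-partitioned Parquet that Athena can query. Bucket names, paths, and column names are placeholders, not details from the vacancy.

```python
# Minimal sketch of a batch ETL job in the Spark/EMR/Athena style described
# above. All bucket names, paths, and columns are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: read raw JSON events landed in S3 (hypothetical path)
raw = spark.read.json("s3://example-raw-bucket/orders/2024-01-01/")

# Transform: basic deduplication, filtering, and date derivation
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("created_at"))
)

# Load: write partitioned Parquet so Athena can prune partitions at query time
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))
```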
London (City of London), South East England, United Kingdom
Vallum Associates
5+ years of experience in Data Engineering. Strong hands-on experience with Hadoop (HDFS, Hive, etc.). Proficient in Python scripting for data transformation and orchestration. Working experience with Apache Spark (including Spark Streaming). Solid knowledge of Apache Airflow for pipeline orchestration. Exposure to infrastructure data analytics or monitoring data is highly preferred. Excellent problem …
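For the Spark Streaming requirement above, a hedged Structured Streaming sketch with a Kafka source might look like the following. It assumes the spark-sql-kafka connector package is on the classpath; the broker address and topic name are placeholders.

```python
# Illustrative Spark Structured Streaming job with a Kafka source.
# Requires the spark-sql-kafka connector; broker and topic are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
          .option("subscribe", "events")                     # placeholder topic
          .load())

# Kafka delivers key/value as binary; cast the payload to a readable string
decoded = events.select(F.col("value").cast("string").alias("payload"))

# Write each micro-batch to the console, purely for demonstration
query = decoded.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```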
and transforming data from various sources to data warehouses. - **Programming Expertise:** A solid understanding of Python, PySpark, and SQL is required to manipulate and analyze data efficiently. - **Knowledge of Spark and Airflow:** In-depth knowledge of Apache Spark for big data processing and Apache Airflow for orchestrating complex workflows is essential for managing data pipelines. - **Cloud …
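The Airflow knowledge called out above amounts to expressing pipelines as DAGs of dependent tasks. Below is a minimal, illustrative DAG in the Airflow 2.x style; the task bodies, IDs, and schedule are placeholders rather than a real pipeline.

```python
# Minimal Apache Airflow 2.x DAG sketch: two dependent placeholder tasks.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source")   # placeholder extract step

def transform():
    print("clean and reshape data")  # placeholder transform step

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; earlier versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_extract >> t_transform  # transform runs only after extract succeeds
```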
two of the following: Python, SQL, Java Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams Deep knowledge of database technologies: Distributed systems (e.g., Spark, Hadoop, EMR) RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL) NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j) Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD …
a Senior Data Engineer, Tech Lead, Data Engineering Manager etc. Proven success with modern data infrastructure: distributed systems, batch and streaming pipelines Hands-on knowledge of tools such as Apache Spark, Kafka, Databricks, DBT or similar Experience building, defining, and owning data models, data lakes, and data warehouses Programming proficiency in Python, PySpark, Scala or Java. Experience operating …
Employment Type: Permanent
Salary: £80000 - £95000/annum Attractive Bonus and Benefits
West London, London, United Kingdom Hybrid / WFH Options
Young's Employment Services Ltd
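As a small illustration of the Kafka experience this listing mentions, here is a consumer-side sketch using the kafka-python client; the topic, broker address, and JSON payload shape are all assumptions.

```python
# Illustrative Kafka consumer using the kafka-python client.
# Topic, broker, and payload shape are placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                   # placeholder topic
    bootstrap_servers=["broker:9092"],          # placeholder broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",               # start from the oldest retained offset
)

for message in consumer:
    # Each message carries the deserialized event payload
    print(message.value)
```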
Cleared: Required Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage …
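For the Azure Databricks work described, a minimal PySpark sketch might read from Azure storage and run a sanity-check aggregation. The abfss URI, container, account, and column names are placeholders; on Databricks the Spark session is provided by the runtime, and getOrCreate() simply reuses it.

```python
# Hedged Databricks-style sketch: read CSV from Azure storage, then aggregate.
# URI, container, account, and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a session already exists; getOrCreate() reuses it
spark = SparkSession.builder.getOrCreate()

path = "abfss://raw@examplelake.dfs.core.windows.net/sales/"  # placeholder URI

sales = (spark.read
         .option("header", "true")
         .csv(path))

# Quick sanity-check aggregation on the loaded data
daily = sales.groupBy(F.to_date("sold_at").alias("day")).count()
daily.show()
```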
Python Extensive experience with cloud platforms (AWS, GCP, or Azure) Experience with: Data warehousing and lake architectures ETL/ELT pipeline development SQL and NoSQL databases Distributed computing frameworks (Spark, Kinesis, etc.) Software development best practices including CI/CD, TDD and version control. Containerisation tools like Docker or Kubernetes Experience with Infrastructure as Code tools (e.g. Terraform or …
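Since the listing highlights TDD and CI/CD, here is a tiny pytest-style sketch: a pure transformation function plus a unit test that could run in a CI pipeline. The function and field names are illustrative.

```python
# TDD-style sketch: a pure transformation and its pytest unit test.
# Function and field names are illustrative placeholders.
def normalise_record(record: dict) -> dict:
    """Trim the name field and default a missing country code."""
    return {
        "name": record.get("name", "").strip(),
        "country": record.get("country") or "GB",
    }

def test_normalise_record_defaults_country():
    # A record with whitespace and no country should be cleaned and defaulted
    assert normalise_record({"name": " Ada "}) == {"name": "Ada", "country": "GB"}
```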
Liverpool, Merseyside, North West, United Kingdom Hybrid / WFH Options
Forward Role
databases (SQL Server, MySQL) and NoSQL solutions (MongoDB, Cassandra) Hands-on knowledge of AWS S3 and associated big data services Extensive experience with big data technologies including Hadoop and Spark for large-scale dataset processing Deep understanding of data security frameworks, encryption protocols, access management and regulatory compliance Proven track record building automated, scalable ETL frameworks and data pipeline …
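To ground the AWS S3 and encryption points above, here is a hedged boto3 sketch that lists newly landed objects and writes an output object with server-side KMS encryption. Bucket names, the prefix, and the payload are placeholders; credentials are assumed to come from the environment.

```python
# Hedged boto3 sketch: enumerate landed objects, then write an encrypted one.
# Buckets, prefix, and payload are placeholders; credentials come from the env.
import boto3

s3 = boto3.client("s3")

# Enumerate newly landed objects under a raw-data prefix
response = s3.list_objects_v2(Bucket="example-raw-bucket", Prefix="landing/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Upload with server-side KMS encryption, reflecting the security emphasis above
s3.put_object(
    Bucket="example-curated-bucket",
    Key="curated/output.parquet",
    Body=b"...",                      # placeholder payload
    ServerSideEncryption="aws:kms",
)
```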
type architectures - Proficiency in writing and optimizing SQL - Knowledge of AWS services including S3, Redshift, EMR, Kinesis and RDS. - Experience with Open Source Data Technologies (Hadoop, Hive, HBase, Pig, Spark, etc.) - Ability to write code in Python, Ruby, Scala or other platform-related Big Data technology - Knowledge of professional software engineering practices & best practices for the full software development …
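For the Kinesis item in this list, a producer call with boto3 could look like the sketch below; the stream name, payload, and partition key are assumptions.

```python
# Illustrative Kinesis producer call; stream, payload, and key are placeholders.
import json
import boto3

kinesis = boto3.client("kinesis")

kinesis.put_record(
    StreamName="example-events",              # placeholder stream
    Data=json.dumps({"event": "page_view"}).encode("utf-8"),
    PartitionKey="user-123",                  # routes the record to a shard
)
```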
Catalog). Familiarity with Data Mesh, Data Fabric, and product-led data strategies. Expertise in cloud platforms (AWS, Azure, GCP, Snowflake). Technical Skills Proficiency in big data tools (Apache Spark, Hadoop). Programming knowledge (Python, R, Java) is a plus. Understanding of ETL/ELT, SQL, NoSQL, and data visualisation tools. Awareness of ML/AI integration …
North West London, London, United Kingdom Hybrid / WFH Options
Anson Mccade
knowledge of Kafka, Confluent, and event-driven architecture Hands-on experience with Databricks, Unity Catalog, and Lakehouse architectures Strong architectural understanding across AWS, Azure, GCP, and Snowflake Familiarity with Apache Spark, SQL/NoSQL databases, and programming (Python, R, Java) Knowledge of data visualisation, DevOps principles, and ML/AI integration into data architectures Strong grasp of data …
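As an event-driven sketch matching the Kafka/Confluent line above, the following uses the confluent-kafka Python client to publish a JSON event with a delivery callback; the broker, topic, and event shape are placeholders.

```python
# Event-driven producer sketch with the confluent-kafka client.
# Broker, topic, and event shape are placeholders.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "broker:9092"})  # placeholder broker

def on_delivery(err, msg):
    # Delivery callback: report per-message failures asynchronously
    if err is not None:
        print(f"delivery failed: {err}")

event = {"type": "order_created", "order_id": 42}
producer.produce("orders", value=json.dumps(event).encode("utf-8"),
                 callback=on_delivery)
producer.flush()  # block until outstanding messages are delivered
```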
London (City of London), South East England, United Kingdom
Roc Search
data workflows for performance and scalability Contribute to the overall data strategy and architecture 🔹 Tech Stack You’ll be working with: Programming: Python, SQL, Scala/Java Big Data: Spark, Hadoop, Databricks Pipelines: Airflow, Kafka, ETL tools Cloud: AWS, GCP, or Azure (Glue, Redshift, BigQuery, Snowflake) Data Modelling & Warehousing 🔹 What’s on Offer 💷 £80,000pa (Permanent role) 📍 Hybrid …
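Two routine Spark optimisations relevant to the workflow-tuning duties in this stack are sketched below: pruning early and controlling output partitioning. Paths and column names are placeholders.

```python
# Sketch of two common Spark tuning moves: prune early, control partitioning.
# Paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")  # placeholder path

# Prune columns and rows early so Spark can push work down to the scan
clicks = (events.select("user_id", "event_type", "ts")
                .filter(F.col("event_type") == "click")
                .withColumn("day", F.to_date("ts")))

# Repartition by the partition key to avoid many small output files
(clicks.repartition("day")
       .write.mode("overwrite")
       .partitionBy("day")
       .parquet("s3://example-bucket/curated/clicks/"))
```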
their growth and development Apply agile methodologies (Scrum, pair programming, etc.) to deliver value iteratively Essential Skills & Experience Extensive hands-on experience with programming languages such as Python, Scala, Spark, and SQL Strong background in building and maintaining data pipelines and infrastructure In-depth knowledge of cloud platforms and native cloud services (e.g., AWS, Azure, or GCP) Familiarity with …