robust way possible! Diverse training opportunities and social benefits (e.g. UK pension scheme) What do you offer? Strong hands-on experience working with modern Big Data technologies such as Apache Spark, Trino, Apache Kafka, Apache Hadoop, Apache HBase, Apache NiFi, Apache Airflow, and OpenSearch. Proficiency in cloud-native technologies such as containerization and Kubernetes …
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Searchability
position, you'll develop and maintain a mix of real-time and batch ETL processes, ensuring accuracy, integrity, and scalability across vast datasets. You'll work with Python, SQL, Apache Spark, and AWS services such as EMR, Athena, and Lambda to deliver robust, high-performance solutions. You'll also play a key role in optimising data pipeline architecture, supporting … Proven experience as a Data Engineer, with Python & SQL expertise Familiarity with AWS services (or equivalent cloud platforms) Experience with large-scale datasets and ETL pipeline development Knowledge of Apache Spark (Scala or Python) beneficial Understanding of agile development practices, CI/CD, and automated testing Strong problem-solving and analytical skills Positive team player with excellent communication … required skills) your application to our client in conjunction with this vacancy only. KEY SKILLS: Data Engineer/Python/SQL/AWS/ETL/Data Pipelines/Apache Spark/EMR/Athena/Lambda/Big Data/Manchester/Hybrid Working …
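To make the Spark-on-AWS stack this role describes concrete, here is a minimal PySpark batch ETL sketch. The bucket paths and column names are hypothetical; in practice a job like this would run on EMR, with the curated output registered for Athena queries:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal batch ETL: read raw CSV from S3, clean, and write partitioned Parquet.
# Bucket, prefix, and column names are all illustrative assumptions.
spark = SparkSession.builder.appName("daily-etl").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/events/2024-01-01/")
)

cleaned = (
    raw
    .dropDuplicates(["event_id"])                         # enforce integrity
    .withColumn("event_ts", F.to_timestamp("event_ts"))   # normalise types
    .filter(F.col("event_ts").isNotNull())                # drop unparseable rows
)

# Partitioned Parquet keeps downstream Athena scans cheap on large datasets.
(
    cleaned
    .withColumn("dt", F.to_date("event_ts"))
    .write
    .mode("overwrite")
    .partitionBy("dt")
    .parquet("s3://example-curated-bucket/events/")
)
```

Partitioning by date is a common choice here because Athena prunes partitions at query time, so scan costs stay proportional to the date range queried.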
London, South East, England, United Kingdom Hybrid / WFH Options
Randstad Technologies
scalable data pipelines, specifically using the Hadoop ecosystem and related tools. The role will focus on designing, building, and maintaining scalable data pipelines using the big data Hadoop ecosystem and Apache Spark for large datasets. A key responsibility is to analyse infrastructure logs and operational data to derive insights, demonstrating a strong understanding of both data processing and the … underlying systems. The successful candidate should have the following key skills: Experience with Open Data Platform Hands-on experience with Python for scripting Apache Spark Prior experience of building ETL pipelines Data Modelling 6 Months Contract - Remote Working - £300 to £350 a day Inside IR35 If you are an experienced Hadoop engineer looking for a new role then …
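As a rough sketch of the log-analysis work this contract describes — assuming a plain-text log format with a timestamp, a level, and a bracketed component name, which is an assumption rather than anything the posting specifies:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Derive insight from infrastructure logs stored on HDFS.
# Log layout and HDFS path are hypothetical.
spark = SparkSession.builder.appName("infra-log-insights").getOrCreate()

logs = spark.read.text("hdfs:///logs/app/*.log")  # one row per raw log line

parsed = logs.select(
    F.regexp_extract("value", r"^(\S+ \S+)", 1).alias("ts"),
    F.regexp_extract("value", r"\b(ERROR|WARN|INFO)\b", 1).alias("level"),
    F.regexp_extract("value", r"\[(\w+)\]", 1).alias("component"),
)

# Hourly error counts per component highlight misbehaving infrastructure.
errors_per_hour = (
    parsed
    .filter(F.col("level") == "ERROR")
    .withColumn("hour", F.date_trunc("hour", F.to_timestamp("ts")))
    .groupBy("hour", "component")
    .count()
    .orderBy(F.desc("count"))
)
errors_per_hour.show(20, truncate=False)
```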
5+ years of experience in Data Engineering. Strong hands-on experience with Hadoop (HDFS, Hive, etc.). Proficient in Python scripting for data transformation and orchestration. Working experience with Apache Spark (including Spark Streaming). Solid knowledge of Apache Airflow for pipeline orchestration. Exposure to infrastructure data analytics or monitoring data is highly preferred. Excellent problem-solving …
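To make the Spark-plus-Airflow combination this posting asks for concrete, here is a minimal sketch of an Airflow DAG submitting a packaged Spark job. It assumes the apache-airflow-providers-apache-spark package and a configured spark_default connection; every path and name is illustrative:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

# Hypothetical daily pipeline: Airflow orchestrates a packaged Spark job.
with DAG(
    dag_id="daily_transform",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    transform = SparkSubmitOperator(
        task_id="spark_transform",
        application="/opt/jobs/transform.py",        # the PySpark job to submit
        conn_id="spark_default",
        application_args=["--date", "{{ ds }}"],     # pass the logical run date in
        conf={"spark.sql.shuffle.partitions": "200"},
    )
```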
Role Title: Infrastructure/Platform Engineer - Apache Duration: 9 Months Location: Remote Rate: £ - Umbrella only Would you like to join a global leader in consulting, technology services and digital transformation? Our client is at the forefront of innovation to address the entire breadth of opportunities in the evolving world of cloud, digital and platforms. Role purpose/summary:
- Refactor … prototype Spark jobs into production-quality components, ensuring scalability, test coverage, and integration readiness.
- Package Spark workloads for deployment via Docker/Kubernetes and integrate with orchestration systems (e.g., Airflow, custom schedulers).
- Work with platform engineers to embed Spark jobs into InfoSum's platform APIs and data pipelines.
- Troubleshoot job failures, memory and resource issues, and … execution anomalies across various runtime environments.
- Optimize Spark job performance and advise on best practices to reduce cloud compute and storage costs.
- Guide engineering teams on choosing the right execution strategies across AWS, GCP, and Azure.
- Provide subject matter expertise on using AWS Glue for ETL workloads and integration with S3 and other AWS-native services.
- Implement observability tooling …
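The first two bullets above — refactoring prototype jobs into testable, Kubernetes-deployable components — typically reduce to a structure like the following sketch, where the transform is a pure function that can be unit-tested in isolation and the entry point leaves cluster choice to spark-submit or the Kubernetes scheduler (all names are hypothetical):

```python
import argparse

from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window


def dedupe_latest(df: DataFrame, key: str, ts: str) -> DataFrame:
    """Pure transform: keep the newest record per key. Unit-testable by
    passing in a small DataFrame built with spark.createDataFrame()."""
    w = Window.partitionBy(key).orderBy(F.col(ts).desc())
    return (
        df.withColumn("_rn", F.row_number().over(w))
          .filter(F.col("_rn") == 1)
          .drop("_rn")
    )


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", required=True)
    parser.add_argument("--output", required=True)
    args = parser.parse_args()

    # No master hard-coded: spark-submit / the Kubernetes operator decides.
    spark = SparkSession.builder.appName("dedupe-job").getOrCreate()
    df = spark.read.parquet(args.input)
    dedupe_latest(df, key="record_id", ts="updated_at").write.mode(
        "overwrite"
    ).parquet(args.output)
    spark.stop()


if __name__ == "__main__":
    main()
```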
and transforming data from various sources to data warehouses.
- **Programming Expertise:** A solid understanding of Python, PySpark, and SQL is required to manipulate and analyze data efficiently.
- **Knowledge of Spark and Airflow:** In-depth knowledge of Apache Spark for big data processing and Apache Airflow for orchestrating complex workflows is essential for managing data pipelines.
- **Cloud …
two of the following: Python, SQL, Java Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams Deep knowledge of database technologies: Distributed systems (e.g., Spark, Hadoop, EMR) RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL) NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j) Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD …
a Senior Data Engineer, Tech Lead, Data Engineering Manager etc. Proven success with modern data infrastructure: distributed systems, batch and streaming pipelines Hands-on knowledge of tools such as Apache Spark, Kafka, Databricks, DBT or similar Experience building, defining, and owning data models, data lakes, and data warehouses Programming proficiency in Python, PySpark, Scala or Java. Experience operating …
Employment Type: Permanent
Salary: £80000 - £95000/annum Attractive Bonus and Benefits
West London, London, United Kingdom Hybrid / WFH Options
Young's Employment Services Ltd
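Since the role above pairs Spark with Kafka for streaming pipelines, a minimal Structured Streaming sketch may be useful context. The broker address, topic, and lake paths are hypothetical, and the spark-sql-kafka connector package must be supplied at submit time:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Consume a Kafka topic and land it continuously in a data lake.
spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

# Kafka delivers key/value as binary; cast before writing downstream.
decoded = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    "timestamp",
)

query = (
    decoded.writeStream
    .format("parquet")
    .option("path", "s3://example-lake/events/")
    .option("checkpointLocation", "s3://example-lake/_checkpoints/events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```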
Cleared: Required Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage …
Python Extensive experience with cloud platforms (AWS, GCP, or Azure) Experience with: Data warehousing and lake architectures ETL/ELT pipeline development SQL and NoSQL databases Distributed computing frameworks (e.g., Spark, Kinesis) Software development best practices including CI/CD, TDD and version control. Containerisation tools like Docker or Kubernetes Experience with Infrastructure as Code tools (e.g. Terraform or …
Liverpool, Merseyside, North West, United Kingdom Hybrid / WFH Options
Forward Role
databases (SQL Server, MySQL) and NoSQL solutions (MongoDB, Cassandra) Hands-on knowledge of AWS S3 and associated big data services Extensive experience with big data technologies including Hadoop and Spark for large-scale dataset processing Deep understanding of data security frameworks, encryption protocols, access management and regulatory compliance Proven track record building automated, scalable ETL frameworks and data pipeline …
type architectures - Proficiency in writing and optimizing SQL - Knowledge of AWS services including S3, Redshift, EMR, Kinesis and RDS. - Experience with Open Source Data Technologies (Hadoop, Hive, HBase, Pig, Spark, etc.) - Ability to write code in Python, Ruby, Scala or other platform-related Big Data technology - Knowledge of professional software engineering practices & best practices for the full software development …
Catalog). Familiarity with Data Mesh, Data Fabric, and product-led data strategies. Expertise in cloud platforms (AWS, Azure, GCP, Snowflake). Technical Skills Proficiency in big data tools (Apache Spark, Hadoop). Programming knowledge (Python, R, Java) is a plus. Understanding of ETL/ELT, SQL, NoSQL, and data visualisation tools. Awareness of ML/AI integration …
North West London, London, United Kingdom Hybrid / WFH Options
Anson Mccade
knowledge of Kafka, Confluent, and event-driven architecture Hands-on experience with Databricks, Unity Catalog, and Lakehouse architectures Strong architectural understanding across AWS, Azure, GCP, and Snowflake Familiarity with Apache Spark, SQL/NoSQL databases, and programming (Python, R, Java) Knowledge of data visualisation, DevOps principles, and ML/AI integration into data architectures Strong grasp of data …
their growth and development Apply agile methodologies (Scrum, pair programming, etc.) to deliver value iteratively Essential Skills & Experience Extensive hands-on experience with programming languages such as Python, Scala, Spark, and SQL Strong background in building and maintaining data pipelines and infrastructure In-depth knowledge of cloud platforms and native cloud services (e.g., AWS, Azure, or GCP) Familiarity with …
Reading, England, United Kingdom Hybrid / WFH Options
HD TECH Recruitment
e.g., Azure Data Factory, Synapse, Databricks, Fabric) Data warehousing and lakehouse design ETL/ELT pipelines SQL, Python for data manipulation and machine learning Big Data frameworks (e.g., Hadoop, Spark) Data visualisation (e.g., Power BI) Understanding of statistical analysis and predictive modelling Experience: 5+ years working with Microsoft data platforms 5+ years in a customer-facing consulting or professional …
Sunbury-On-Thames, London, United Kingdom Hybrid / WFH Options
BP Energy
AWS, Azure) and containerisation (Docker, Kubernetes). Familiarity with MLOps practices and tools (e.g., MLflow, SageMaker, Airflow). Experience working with large-scale datasets and distributed computing frameworks (e.g., Spark). Strong communication skills and ability to work collaboratively in a team environment. MSc or PhD in Computer Science, Engineering, Mathematics, or a related field. Desirable Skills Experience with …