Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Searchability
position, you'll develop and maintain a mix of real-time and batch ETL processes, ensuring accuracy, integrity, and scalability across vast datasets. You'll work with Python, SQL, Apache Spark, and AWS services such as EMR, Athena, and Lambda to deliver robust, high-performance solutions. You'll also play a key role in optimising data pipeline architecture, supporting … Proven experience as a Data Engineer, with Python & SQL expertise Familiarity with AWS services (or equivalent cloud platforms) Experience with large-scale datasets and ETL pipeline development Knowledge of Apache Spark (Scala or Python) beneficial Understanding of agile development practices, CI/CD, and automated testing Strong problem-solving and analytical skills Positive team player with excellent communication … required skills) your application to our client in conjunction with this vacancy only. KEY SKILLS: Data Engineer/Python/SQL/AWS/ETL/Data Pipelines/Apache Spark/EMR/Athena/Lambda/Big Data/Manchester/Hybrid Working
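The batch-ETL pattern this listing describes can be sketched in plain Python. This is a simplified stand-alone illustration only: in a role like this the same extract/transform/load stages would be expressed as PySpark transformations running on AWS EMR, and the function names and record fields below are illustrative assumptions, not anything from the listing.

```python
# Minimal batch ETL sketch: extract -> transform -> load.
# The in-memory lists stand in for distributed datasets; in production
# each stage would be a Spark job reading from and writing to S3.

def extract(rows):
    """Stand-in for reading raw records (e.g. from S3 via EMR/Athena)."""
    return list(rows)

def transform(rows):
    """Validate and normalise records, dropping malformed ones
    (the 'accuracy and integrity' step)."""
    cleaned = []
    for row in rows:
        if row.get("amount") is None:  # integrity check: reject incomplete rows
            continue
        cleaned.append({
            "id": row["id"],
            "amount": round(float(row["amount"]), 2),
        })
    return cleaned

def load(rows, sink):
    """Stand-in for writing to a warehouse table or S3 partition."""
    sink.extend(rows)
    return len(rows)

raw = [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": None}]
sink = []
loaded = load(transform(extract(raw)), sink)
print(loaded)  # 1 record survives validation
```

The same three-stage shape holds whether the stages are plain functions, Spark jobs, or Lambda steps; only the execution engine changes.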
London (City of London), South East England, United Kingdom
Vallum Associates
5+ years of experience in Data Engineering. Strong hands-on experience with Hadoop (HDFS, Hive, etc.). Proficient in Python scripting for data transformation and orchestration. Working experience with Apache Spark (including Spark Streaming). Solid knowledge of Apache Airflow for pipeline orchestration. Exposure to infrastructure data analytics or monitoring data is highly preferred. Excellent problem-solving skills.
Liverpool, Merseyside, North West, United Kingdom Hybrid / WFH Options
Forward Role
databases (SQL Server, MySQL) and NoSQL solutions (MongoDB, Cassandra) Hands-on knowledge of AWS S3 and associated big data services Extensive experience with big data technologies including Hadoop and Spark for large-scale dataset processing Deep understanding of data security frameworks, encryption protocols, access management and regulatory compliance Proven track record building automated, scalable ETL frameworks and data pipelines
optimising large-scale data systems Expertise in cloud-based data platforms (AWS, Azure, Google Cloud) and distributed storage solutions Proficiency in Python, PySpark, SQL, NoSQL, and data processing frameworks (Spark, Databricks) Expertise in ETL/ELT design and orchestration in Azure, as well as pipeline performance tuning & optimisation Competent in integrating relational, NoSQL, and streaming data sources Management of …
Catalog). Familiarity with Data Mesh, Data Fabric, and product-led data strategies. Expertise in cloud platforms (AWS, Azure, GCP, Snowflake). Technical Skills Proficiency in big data tools (Apache Spark, Hadoop). Programming knowledge (Python, R, Java) is a plus. Understanding of ETL/ELT, SQL, NoSQL, and data visualisation tools. Awareness of ML/AI integration.
North West London, London, United Kingdom Hybrid / WFH Options
Anson Mccade
knowledge of Kafka, Confluent, and event-driven architecture Hands-on experience with Databricks, Unity Catalog, and Lakehouse architectures Strong architectural understanding across AWS, Azure, GCP, and Snowflake Familiarity with Apache Spark, SQL/NoSQL databases, and programming (Python, R, Java) Knowledge of data visualisation, DevOps principles, and ML/AI integration into data architectures Strong grasp of data …
London (City of London), South East England, United Kingdom
Roc Search
data workflows for performance and scalability Contribute to the overall data strategy and architecture 🔹 Tech Stack You’ll be working with: Programming: Python, SQL, Scala/Java Big Data: Spark, Hadoop, Databricks Pipelines: Airflow, Kafka, ETL tools Cloud: AWS, GCP, or Azure (Glue, Redshift, BigQuery, Snowflake) Data Modelling & Warehousing 🔹 What’s on Offer 💷 £80,000pa (Permanent role) 📍 Hybrid
Reading, England, United Kingdom Hybrid / WFH Options
HD TECH Recruitment
e.g., Azure Data Factory, Synapse, Databricks, Fabric) Data warehousing and lakehouse design ETL/ELT pipelines SQL, Python for data manipulation and machine learning Big Data frameworks (e.g., Hadoop, Spark) Data visualisation (e.g., Power BI) Understanding of statistical analysis and predictive modelling Experience: 5+ years working with Microsoft data platforms 5+ years in a customer-facing consulting or professional services role
Reigate, Surrey, England, United Kingdom Hybrid / WFH Options
esure Group
exposure to cloud-native data infrastructures (Databricks, Snowflake), especially in AWS environments, is a plus Experience in building and maintaining batch and streaming data pipelines using Kafka, Airflow, or Spark Familiarity with governance frameworks, access controls (RBAC), and implementation of pseudonymisation and retention policies Exposure to enabling GenAI and ML workloads by preparing model-ready and vector-optimised datasets
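Pseudonymisation of the kind this listing mentions is commonly implemented as keyed hashing of identifier columns: the same input always maps to the same token, so joins and analytics still work, but the original value cannot be recovered without the key. A minimal stand-alone sketch follows; the salt value and field names are illustrative assumptions, and in a real pipeline this logic would run inside the Kafka/Airflow/Spark job with the key fetched from a secrets manager.

```python
import hashlib
import hmac

# Assumption: in production this key comes from a vault, never source code.
SECRET_KEY = b"rotate-me-via-a-secrets-manager"

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256).

    Deterministic, so the token can still be used as a join key,
    but irreversible without SECRET_KEY.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer_email": "jane@example.com", "claim_amount": 1200}
record["customer_email"] = pseudonymise(record["customer_email"])
```

Using an HMAC rather than a bare hash matters: without the key, an attacker could rebuild the mapping by hashing a dictionary of likely identifiers.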
Sunbury-On-Thames, London, United Kingdom Hybrid / WFH Options
BP Energy
AWS, Azure) and containerisation (Docker, Kubernetes). Familiarity with MLOps practices and tools (e.g., MLflow, SageMaker, Airflow). Experience working with large-scale datasets and distributed computing frameworks (e.g., Spark). Strong communication skills and ability to work collaboratively in a team environment. MSc or PhD in Computer Science, Engineering, Mathematics, or a related field. Desirable Skills: Experience with …
Bedford, Bedfordshire, England, United Kingdom Hybrid / WFH Options
Reed Talent Solutions
source systems into our reporting solutions. Pipeline Development: Develop and configure metadata-driven data pipelines using data orchestration tools such as Azure Data Factory and engineering tools like Apache Spark to ensure seamless data flow. Monitoring and Failure Recovery: Implement monitoring procedures to detect failures or unusual data profiles and establish recovery processes to maintain data integrity. … in Azure data tooling such as Synapse Analytics, Microsoft Fabric, Azure Data Lake Storage/OneLake, and Azure Data Factory. Understanding of data extraction from vendor REST APIs. Spark/PySpark or Python skills a bonus or a willingness to develop these skills. Experience with monitoring and failure recovery in data pipelines. Excellent problem-solving skills and attention to detail.
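A "metadata-driven" pipeline of the kind described above is one where source and target details live in configuration rather than in per-table code: one generic loader iterates over metadata rows. The sketch below shows the pattern in plain Python; the table names and the `copy_table` stand-in are illustrative assumptions. In Azure Data Factory the equivalent is a parameterised pipeline driven by a Lookup activity feeding a ForEach loop.

```python
# Metadata-driven ingestion sketch: one generic loader driven by config
# rows, instead of one hand-written pipeline per source table.

PIPELINE_METADATA = [
    {"source": "crm.contacts", "target": "raw.contacts", "mode": "full"},
    {"source": "erp.invoices", "target": "raw.invoices", "mode": "incremental"},
]

def copy_table(source: str, target: str, mode: str) -> str:
    """Stand-in for a parameterised copy activity (ADF) or Spark read/write."""
    return f"{mode} copy {source} -> {target}"

def run_pipeline(metadata):
    """Each metadata row drives one invocation of the generic loader,
    so adding a new source table means adding a config row, not code."""
    return [copy_table(e["source"], e["target"], e["mode"]) for e in metadata]

for line in run_pipeline(PIPELINE_METADATA):
    print(line)
```

The payoff is operational: onboarding a new source or changing a load mode is a metadata edit, which is why the pattern pairs naturally with the monitoring and recovery duties the listing describes.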
Leeds, West Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
Fruition Group
best practices for data security and compliance. Collaborate with stakeholders and external partners. Skills & Experience: Strong experience with AWS data technologies (e.g., S3, Redshift, Lambda). Proficient in Python, Apache Spark, and SQL. Experience in data warehouse design and data migration projects. Cloud data platform development and deployment. Expertise across data warehouse and ETL/ELT development in …
London (City of London), South East England, United Kingdom
Sahaj Software
technology fundamentals and experience with languages like Python, or functional programming languages like Scala Demonstrated experience in design and development of big data applications using tech stacks like Databricks, Apache Spark, HDFS, HBase and Snowflake Commendable skills in building data products by integrating large sets of data from hundreds of internal and external sources would be highly critical.
Strong capability in documenting high-level designs, system workflows, and technical specifications. Deep understanding of architectural patterns (e.g., microservices, layered, serverless). Expert knowledge of big data tools (Kafka, Spark, Hadoop) and major cloud platforms (AWS, Azure, GCP) to inform technology recommendations. Proficiency in Java, C#, Python, or JavaScript/TypeScript. Experience with cloud platforms (AWS, Azure, Google Cloud).
observability. Preferred Qualifications Exposure to machine learning workflows, model lifecycle management, or data engineering platforms. Experience with distributed systems, event-driven architectures (e.g., Kafka), and big data platforms (e.g., Spark, Databricks). Familiarity with banking or financial domain use cases, including data governance and compliance-focused development. Knowledge of platform security, monitoring, and resilient architecture patterns.