teams to build scalable data pipelines and contribute to digital transformation initiatives across government departments.

Key Responsibilities
- Design, develop and maintain robust data pipelines using PostgreSQL and Airflow or Apache Spark
- Collaborate with frontend/backend developers using Node.js or React
- Implement best practices in data modelling, ETL processes and performance optimisation
- Contribute to containerised deployments (Docker/…) within Agile teams and support DevOps practices

What We're Looking For
- Proven experience as a Data Engineer in complex environments
- Strong proficiency in PostgreSQL and either Airflow or Spark
- Solid understanding of Node.js or React for integration and tooling
- Familiarity with containerisation technologies (Docker/Kubernetes) is a plus
- Excellent communication and stakeholder engagement skills
- Experience working within …
City of London, London, United Kingdom Hybrid / WFH Options
Peaple Talent
a focus on having delivered in Microsoft Azure
- Strong experience designing and delivering data solutions in Databricks
- Proficient with SQL and Python
- Experience using Big Data technologies such as Apache Spark or PySpark
- Great communication skills, effectively participating with Senior Stakeholders

Nice to haves:
- Azure Data Engineering certifications
- Databricks certifications

What's in it for you:
📍Location: London
Proficiency in one or more programming languages including Java, Python, Scala or Golang. Experience with columnar, analytical cloud data warehouses (e.g., BigQuery, Snowflake, Redshift) and data processing frameworks like Apache Spark is essential. Experience with cloud platforms like AWS, Azure, or Google Cloud. Strong proficiency in designing, developing, and deploying microservices architecture, with a deep understanding of inter…
Pydantic) for document processing, summarization, and clinical Q&A systems. Develop and optimize predictive models using scikit-learn, PyTorch, TensorFlow, and XGBoost. Design robust data pipelines using tools like Spark and Kafka for real-time and batch processing. Manage the ML lifecycle with tools such as Databricks, MLflow, and cloud-native platforms (Azure preferred). Collaborate with engineering teams to …
SOAP APIs, JSON, XML, and flat file processing. Version control experience (e.g., Git, SVN). Solid analytical and problem-solving skills. Experience with big data platforms (e.g., Hadoop, Spark) or cloud ETL tools (AWS Glue, Azure Data Factory, etc.). Knowledge of BI tools (e.g., Pentaho BA Server, Tableau, Power BI). Familiarity with data governance, metadata management …
Northampton, Northamptonshire, East Midlands, United Kingdom
Experis
It, Express>It, Metadata Hub, and PDL. Hands-on experience with SQL, Unix/Linux shell scripting, and data warehouse concepts. Familiarity with big data ecosystems (Hadoop, Hive, Spark) and cloud platforms (AWS, Azure, GCP) is a plus. Proven ability to troubleshoot complex ETL jobs and resolve performance issues. Experience working with large-scale datasets and enterprise data …
demonstrate the following experience:
- Commercial experience gained in a Data Engineering role on any major cloud platform (Azure, AWS or GCP)
- Experience in prominent languages such as Python, Scala, Spark, SQL
- Experience working with any database technologies from an application programming perspective: Oracle, MySQL, MongoDB, etc.
- Some experience with the design, build and maintenance of data pipelines and …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Gemba Advantage
- Java, TypeScript, Python, and Go
- Web libraries and frameworks such as React and Angular
- Designing, building, and maintaining CI/CD pipelines
- Big data technologies, such as NiFi, Hadoop, Spark
- Cloud and containerization technologies such as AWS, OpenShift, Kubernetes, Docker
- DevOps methodologies, such as infrastructure as code and GitOps
- Database technologies, e.g. relational databases, Elasticsearch, Mongo

Why join Gemba …
Research/Statistics or other quantitative fields. Experience in NLP, image processing and/or recommendation systems. Hands-on experience in data engineering, working with big data frameworks like Spark/Hadoop. Experience in data science for e-commerce and/or OTA. We welcome both local and international applications for this role. Full visa sponsorship and relocation assistance …
City of London, London, United Kingdom Hybrid / WFH Options
ECS
engineering with a strong focus on building scalable data pipelines
- Expertise in Azure Databricks (7+ years), including building and managing ETL pipelines using PySpark or Scala (essential)
- Solid understanding of Apache Spark, Delta Lake, and distributed data processing concepts
- Hands-on experience with Azure Data Lake Storage Gen2, Azure Data Factory, and Azure Synapse Analytics
- Proficiency in SQL and …
of SQL and Python
- You have strong hands-on experience of building scalable data pipelines in cloud-based environments using tools such as DBT, AWS Glue, AWS Lake Formation, Apache Spark and Amazon Redshift
- You have a good knowledge of data modelling, ELT design patterns, data governance and security best practices
- You're collaborative and pragmatic with great …
testing of ETL (extract, transform, load) processes and data warehousing.
3. Strong understanding of SQL for data querying and validation.
4. Knowledge of big data technologies such as Hadoop, Spark, or Kafka is a plus.
5. Familiarity with scripting languages like Python, Java, or shell scripting.
6. Excellent analytical and problem-solving skills with a keen attention to detail.