data retention and archival strategies in cloud environments. Strong understanding and practical implementation of Medallion Architecture (Bronze, Silver, Gold layers) for structured data processing. Advanced programming skills in Python, PySpark, and SQL, with the ability to build modular, efficient, and scalable data pipelines. Deep expertise in data modeling for both relational databases and data warehouses, including Star and Snowflake …
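To make the Medallion terminology concrete, here is a minimal PySpark sketch of a Bronze, Silver, Gold flow, assuming a Delta-capable runtime such as Databricks; the paths, dataset names, and columns are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: land raw source data as-is, adding only ingestion metadata.
bronze = (spark.read.json("s3://lake/raw/orders/")  # hypothetical source path
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("s3://lake/bronze/orders")

# Silver: cleanse and conform - deduplicate, enforce keys, cast types.
silver = (spark.read.format("delta").load("s3://lake/bronze/orders")
          .dropDuplicates(["order_id"])
          .filter(F.col("order_id").isNotNull())
          .withColumn("order_date", F.to_date("order_date")))
silver.write.format("delta").mode("overwrite").save("s3://lake/silver/orders")

# Gold: aggregate into an analytics-ready dimensional shape.
gold = (silver.groupBy("customer_id", "order_date")
        .agg(F.sum("amount").alias("daily_spend")))
gold.write.format("delta").mode("overwrite").save("s3://lake/gold/daily_spend")
```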
Newbury, Berkshire, England, United Kingdom Hybrid / WFH Options
Intuita
including Azure DevOps or GitHub. Considerable experience designing and building operationally efficient pipelines, utilising core Cloud components such as Azure Data Factory, BigQuery, Airflow, Google Cloud Composer and PySpark. Proven experience in modelling data through a medallion-based architecture, with curated dimensional models in the gold layer built for analytical use. Strong understanding and/or use of …
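On the orchestration side mentioned above, a minimal Airflow 2.x DAG sketch chaining the medallion layers; the DAG id, task callables, and schedule are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def promote(layer: str) -> None:
    # Placeholder for a PySpark job that promotes data into the given layer.
    print(f"running {layer} load")

# One task per medallion layer, run daily in order bronze -> silver -> gold.
with DAG(
    dag_id="medallion_pipeline",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    bronze = PythonOperator(task_id="bronze", python_callable=promote, op_args=["bronze"])
    silver = PythonOperator(task_id="silver", python_callable=promote, op_args=["silver"])
    gold = PythonOperator(task_id="gold", python_callable=promote, op_args=["gold"])

    bronze >> silver >> gold
```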
Microsoft Azure data services (Azure Data Factory, Azure Data Fabric, Azure Synapse Analytics, Azure SQL Database). Experience building ELT/ETL pipelines and managing data workflows. Proficiency in PySpark, Python, SQL, or Scala. Strong data modelling and relational database knowledge. Solid understanding of GDPR and UK data protection. Preferred: Power BI experience. Familiarity with legal industry platforms. Awareness …
engineering and Azure cloud data technologies. You must be confident working across: Azure Data Services, including Azure Data Factory, Azure Synapse Analytics, Azure Databricks and Microsoft Fabric (desirable); Python and PySpark for data engineering, transformation, and automation; ETL/ELT pipelines across diverse structured and unstructured data sources; data lakehouse and data warehouse architecture design; and Power BI for enterprise-grade …
Slough, South East England, United Kingdom Hybrid / WFH Options
Hexegic
and validate data models and outputs. Set up monitoring and ensure data health for outputs. What we are looking for: proficiency in Python, with experience in Apache Spark and PySpark; previous experience with data analytics software; ability to scope new integrations and translate user requirements into technical specifications. What’s in it for you? Base salary of …
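As an illustration of the monitoring and data-health duties described, a minimal PySpark validation sketch; the dataset path, column name, and threshold are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("data-health").getOrCreate()
df = spark.read.parquet("s3://lake/silver/events")  # hypothetical dataset

# Compute simple health metrics: row count and null rate on a key column.
total = df.count()
null_ids = df.filter(F.col("event_id").isNull()).count()
null_rate = null_ids / total if total else 1.0

# Fail loudly if the output breaches an agreed quality threshold.
if total == 0 or null_rate > 0.01:
    raise ValueError(f"data health check failed: rows={total}, null_rate={null_rate:.2%}")
```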
exchange platforms. Knowledge of dynamic pricing models. Experience with Databricks for scalable data processing and machine learning workflows. Experience working with big data technologies (e.g., Spark, PySpark). Experience with online market research methods/products. Additional Information: Our Values. Collaboration is our superpower. We uncover rich perspectives across the world. Success happens together. We deliver …
and experienced AWS Lead Data Engineer, who will build and lead the development of scalable data pipelines and platforms on AWS. The ideal candidate will have deep expertise in PySpark, Glue, Athena, AWS Lake Formation, data modelling, DBT, Airflow and Docker, and will be responsible for driving best practices in data engineering, governance, and DevOps. Key Responsibilities: • Lead the design and implementation of scalable, secure, and high-performance data pipelines using PySpark and AWS Glue. • Architect and manage data lakes using AWS Lake Formation, ensuring proper access control and data governance. • Develop and optimize data models (dimensional and normalized) to support analytics and reporting. • Collaborate with analysts and business stakeholders to understand data requirements and deliver robust solutions. • Implement and … Engineering, or related field. • 10+ years of experience in data engineering. • Strong hands-on experience with AWS services: S3, Glue, Lake Formation, Athena, Redshift, Lambda, IAM, CloudWatch. • Proficiency in PySpark, Python, DBT, Airflow, Docker and SQL. • Deep understanding of data modeling techniques and best practices. • Experience with CI/CD tools and version control systems like Git. • Familiarity with …
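For context on the PySpark-plus-Glue pipeline work this role describes, a minimal AWS Glue job skeleton using the standard Glue bootstrap; the catalog database, table, and output path are hypothetical:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap: wrap the Spark context and read job arguments.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalog table (governed by Lake Formation when permissions are set up).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales", table_name="orders"  # hypothetical database/table
).toDF()

# Transform with plain PySpark, then write back as Parquet for Athena to query.
daily = orders.groupBy("order_date").agg(F.sum("amount").alias("revenue"))
daily.write.mode("overwrite").parquet("s3://lake/gold/daily_revenue/")

job.commit()
```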
on experience or strong interest in working with Foundry as a core platform. Forward-Deployed Engineering – delivering real-time solutions alongside users and stakeholders. Broader Skillsets of Interest: Python & PySpark – for data engineering and workflow automation; Platform Engineering – building and maintaining scalable, resilient infrastructure; Cloud (AWS preferred) – deploying and managing services in secure environments; Security Engineering & Access Control – designing …
Bracknell, Berkshire, United Kingdom Hybrid / WFH Options
Teksystems
in building scalable data solutions that empower market readiness. 3 months initial contract. Remote working (UK based). Inside IR35. Responsibilities: Design, develop, and maintain data pipelines using Palantir Foundry, PySpark, and TypeScript. Collaborate with cross-functional teams to integrate data sources and ensure data quality and consistency. Implement robust integration and unit testing strategies to validate data workflows. Engage …
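As a sketch of what a Foundry data pipeline step looks like in practice, a minimal Python transform using Foundry's transforms API; the dataset paths and column names are hypothetical:

```python
from pyspark.sql import functions as F
from transforms.api import Input, Output, transform_df

# A minimal Foundry Python transform: one input dataset, one output dataset.
@transform_df(
    Output("/Company/pipelines/clean_orders"),  # hypothetical output path
    raw=Input("/Company/raw/orders"),           # hypothetical input path
)
def clean_orders(raw):
    # Deduplicate on the key and drop rows missing it before publishing.
    return (raw
            .dropDuplicates(["order_id"])
            .filter(F.col("order_id").isNotNull()))
```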
Bracknell, Berkshire, United Kingdom Hybrid / WFH Options
Teksystems
Provide hands-on support for production environments, ensuring the stability and performance of data workflows. Troubleshoot and resolve issues related to data pipelines and integrations built using Palantir Foundry, PySpark, and TypeScript. Collaborate with engineering and business teams to understand requirements and deliver timely solutions. Support and improve continuous integration (CI) processes to streamline deployment and reduce downtime. Communicate …
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential skills: Programming Languages such as Spark, Java, Python, PySpark, Scala (minimum of 2). Extensive Data Engineering hands-on experience (coding, configuration, automation, delivery, monitoring, security). ETL Tools such as Azure Data Factory (ADF) and Databricks or … UK, and you MUST have the Right to Work in the UK long-term without the need for Company Sponsorship. KEYWORDS Senior Data Engineer, Coding Skills, Spark, Java, Python, PySpark, Scala, ETL Tools, Azure Data Factory (ADF), Databricks, HDFS, Hadoop, Big Data, Cloudera, Data Lakes, Azure Data, Delta Lake, Data Lake, Databricks Lakehouse, Data Analytics, SQL, Geospatial Data, FME …
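For the Delta Lake and Databricks Lakehouse part of that stack, a minimal sketch of writing a Delta table and reading an earlier version back, assuming a Delta-enabled Spark session such as Databricks; the path and column name are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-demo").getOrCreate()

# Write a DataFrame as a Delta table.
df = spark.range(100).withColumnRenamed("id", "reading_id")
df.write.format("delta").mode("overwrite").save("/mnt/lake/bronze/readings")

# Delta time travel: versionAsOf queries the table as of an earlier commit.
first_version = (spark.read.format("delta")
                 .option("versionAsOf", 0)
                 .load("/mnt/lake/bronze/readings"))
first_version.show(5)
```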
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential Skills: Programming Languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2). Extensive Big Data hands-on experience across coding/configuration/automation/monitoring/security is necessary. Significant AWS or Azure … the Right to Work in the UK long-term as our client is NOT offering sponsorship for this role. KEYWORDS Lead Data Engineer, Senior Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, Cloud, On-Prem, ETL, Azure Data Factory, ADF, Hadoop, HDFS, Azure Data, Delta Lake, Data Lake. Please note that due to a high level …
Reading, Berkshire, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential skills: Programming Languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2). Extensive Data Engineering and Data Analytics hands-on experience. Significant AWS hands-on experience. Technical Delivery Manager skills. Geospatial Data experience (including QGIS … support your well-being and career growth. KEYWORDS Principal Geospatial Data Engineer, Geospatial, GIS, QGIS, FME, AWS, On-Prem Services, Software Engineering, Data Engineering, Data Analytics, Spark, Java, Python, PySpark, Scala, ETL Tools, AWS Glue. Please note, to be considered for this role you MUST reside/live in the UK, and you MUST have the Right to Work …
Employment Type: Temporary
Salary: £80000 - £500000/annum Pension, Good Holiday, Insurances