Slough, South East England, United Kingdom Hybrid / WFH Options
Publicis Production
data modeling, data warehousing concepts, and distributed systems. Excellent problem-solving skills and the ability to independently design, build, and validate output data. Deep proficiency in Python (including PySpark), SQL, and cloud-based data engineering tools. Expertise in multiple cloud platforms (AWS, GCP, or Azure) and managing cloud-based data infrastructure. Strong background in database technologies (SQL Server …
Reading, Berkshire, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
to build robust data pipelines and applications that process complex datasets from multiple operational systems. Key Responsibilities: Build and maintain AWS-based ETL/ELT pipelines using S3, Glue (PySpark/Python), Lambda, Athena, Redshift, and Step Functions. Develop backend applications to automate and support compliance reporting. Process and validate complex data formats including nested JSON, XML, and CSV …
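The "process and validate nested JSON" responsibility above can be sketched in miniature. The role's real pipelines would run as Glue (PySpark) jobs, but the core flattening step — turning nested records into the flat, dotted-column shape an Athena or Redshift table expects — looks like this in plain Python (the record shape and field names here are invented for illustration):

```python
import json

def flatten(record: dict, prefix: str = "") -> dict:
    """Recursively flatten a nested JSON record into dotted column
    names, the flat shape a warehouse table expects."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=f"{name}."))
        else:
            flat[name] = value
    return flat

# Hypothetical operational record, parsed from a JSON feed.
raw = json.loads('{"id": 1, "meter": {"serial": "A42", "readings": {"kwh": 13.5}}}')
print(flatten(raw))
# {'id': 1, 'meter.serial': 'A42', 'meter.readings.kwh': 13.5}
```

In a Glue job the same logic would typically be expressed with DataFrame operations or Glue's Relationalize transform rather than hand-rolled recursion.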
Microsoft Azure data services (Azure Data Factory, Azure Data Fabric, Azure Synapse Analytics, Azure SQL Database). Experience building ELT/ETL pipelines and managing data workflows. Proficiency in PySpark, Python, SQL, or Scala. Strong data modelling and relational database knowledge. Solid understanding of GDPR and UK data protection. Preferred: Power BI experience; familiarity with legal industry platforms; awareness …
within Microsoft Azure data tools (Azure Data Factory, Azure Synapse, or Azure SQL). ✅ Dimensional modelling expertise for analytics use cases. ✅ Strong ETL/ELT development skills. ✅ Python/PySpark experience. ✅ Experience with CI/CD methodologies for data platforms. ✅ Deep knowledge of SQL. ✅ Extensive London Markets experience. Why Join? 🚀 New Projects – Work on a new data platform, shaping …
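The dimensional modelling skill listed above boils down to splitting flat transaction rows into dimension and fact tables. A minimal star-schema sketch, with hypothetical insurance-flavoured records (the broker/premium fields are invented for illustration, not taken from the ad):

```python
# Flat source rows, as they might land from an operational system.
rows = [
    {"policy": "P1", "broker": "Aon",   "premium": 1200.0},
    {"policy": "P2", "broker": "Marsh", "premium": 950.0},
    {"policy": "P3", "broker": "Aon",   "premium": 400.0},
]

dim_broker = {}    # dimension: broker name -> surrogate key
fact_policy = []   # fact rows reference the dimension by key
for row in rows:
    key = dim_broker.setdefault(row["broker"], len(dim_broker) + 1)
    fact_policy.append({"policy": row["policy"], "broker_key": key,
                        "premium": row["premium"]})

print(dim_broker)      # {'Aon': 1, 'Marsh': 2}
print(fact_policy[2])  # {'policy': 'P3', 'broker_key': 1, 'premium': 400.0}
```

In Synapse or Azure SQL the same split would be expressed as dimension and fact tables joined on the surrogate key, which is what analytics tools like Power BI model against.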
Slough, South East England, United Kingdom Hybrid / WFH Options
Amarji
fast-paced and rapidly evolving start-up environment. Skills: Data engineering Microsoft Fabric (Snowflake, Databricks considered) Power BI DAX API M Query Python SQL Data pipelines/Dataflow Gen2/PySpark notebooks Data modelling Benefits: - Mainly fully remote position, with the flexibility to work from home or any location that suits you best. - Occasional requirement to visit client sites for …
engineering and Azure cloud data technologies. You must be confident working across: Azure Data Services, including: Azure Data Factory Azure Synapse Analytics Azure Databricks Microsoft Fabric (desirable) Python and PySpark for data engineering, transformation, and automation ETL/ELT pipelines across diverse structured and unstructured data sources Data lakehouse and data warehouse architecture design Power BI for enterprise-grade …
Slough, South East England, United Kingdom Hybrid / WFH Options
Hexegic
and validate data models and outputs. Set up monitoring and ensure data health for outputs. What we are looking for: Proficiency in Python, with experience in Apache Spark and PySpark. Previous experience with data analytics software. Ability to scope new integrations and translate user requirements into technical specifications. What’s in it for you? Base salary of …
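The "set up monitoring and ensure data health" duty above typically means computing batch-level health metrics and failing loudly on bad outputs. A minimal sketch in plain Python (the field names and threshold logic are assumptions for illustration; a production setup would feed these metrics into an alerting system):

```python
def health_check(rows, required=("id", "timestamp")):
    """Return simple data-health metrics for a batch of output rows:
    row count, null count per required field, and a pass/fail flag."""
    nulls = {field: 0 for field in required}
    for row in rows:
        for field in required:
            if row.get(field) is None:
                nulls[field] += 1
    return {"rows": len(rows), "nulls": nulls,
            "healthy": len(rows) > 0 and not any(nulls.values())}

batch = [{"id": 1, "timestamp": "2024-01-01"},
         {"id": 2, "timestamp": None}]       # bad row: missing timestamp
print(health_check(batch))
# {'rows': 2, 'nulls': {'id': 0, 'timestamp': 1}, 'healthy': False}
```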
exchange platforms. Knowledge of dynamic pricing models. Experience with Databricks and using it for scalable data processing and machine learning workflows. Experience working with big data technologies (e.g., Spark, PySpark). Experience with online market research methods/products. Additional Information Our Values: Collaboration is our superpower. We uncover rich perspectives across the world. Success happens together. We deliver …
and experienced AWS Lead Data Engineer, who will build and lead the development of scalable data pipelines and platforms on AWS. The ideal candidate will have deep expertise in PySpark, Glue, Athena, AWS Lake Formation, data modelling, DBT, Airflow, and Docker, and will be responsible for driving best practices in data engineering, governance, and DevOps. Key Responsibilities: • Lead the design and … implementation of scalable, secure, and high-performance data pipelines using PySpark and AWS Glue. • Architect and manage data lakes using AWS Lake Formation, ensuring proper access control and data governance. • Develop and optimize data models (dimensional and normalized) to support analytics and reporting. • Collaborate with analysts and business stakeholders to understand data requirements and deliver robust solutions. • Implement and … Engineering, or related field. • 10+ years of experience in data engineering. • Strong hands-on experience with AWS services: S3, Glue, Lake Formation, Athena, Redshift, Lambda, IAM, CloudWatch. • Proficiency in PySpark, Python, DBT, Airflow, Docker, and SQL. • Deep understanding of data modeling techniques and best practices. • Experience with CI/CD tools and version control systems like Git. • Familiarity with …
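One concrete piece of the data-lake architecture work described above is partition layout: S3 keys are built Hive-style so Glue crawlers register the partitions and Athena can prune them on year/month/day predicates. A small sketch (the table and file names are invented for illustration):

```python
from datetime import date

def lake_key(table: str, event_day: date, filename: str) -> str:
    """Build a Hive-style partitioned S3 key (year=/month=/day=),
    the layout Glue and Athena use for partition pruning."""
    return (f"{table}/year={event_day.year}/month={event_day.month:02d}/"
            f"day={event_day.day:02d}/{filename}")

print(lake_key("orders", date(2024, 3, 7), "part-0000.parquet"))
# orders/year=2024/month=03/day=07/part-0000.parquet
```

With this layout, a query filtered on `year = 2024 AND month = 3` reads only the matching prefix instead of scanning the whole table.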
on experience or strong interest in working with Foundry as a core platform Forward-Deployed Engineering – delivering real-time solutions alongside users and stakeholders Broader Skillsets of Interest: Python & PySpark – for data engineering and workflow automation Platform Engineering – building and maintaining scalable, resilient infrastructure Cloud (AWS preferred) – deploying and managing services in secure environments Security Engineering & Access Control – designing …
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential skills: Programming Languages such as Spark, Java, Python, PySpark, Scala, or similar (minimum of 2) Extensive Data Engineering and Data Analytics hands-on experience Significant AWS hands-on experience Technical Delivery Manager skills Geospatial Data experience (including QGIS … support your well-being and career growth. KEYWORDS Principal Geospatial Data Engineer, Geospatial, GIS, QGIS, FME, AWS, On-Prem Services, Software Engineering, Data Engineering, Data Analytics, Spark, Java, Python, PySpark, Scala, ETL Tools, AWS Glue. Please note, to be considered for this role you MUST reside/live in the UK, and you MUST have the Right to Work …
Proven experience with Fabric implementations (not Snowflake, Databricks, or other platforms). Deep understanding of Fabric ecosystem components and best practices. Experience with medallion architecture implementation in Fabric. Technical Skills: PySpark: Advanced proficiency in PySpark for data processing. Data Engineering: ETL/ELT pipeline development and optimization. Real-time Processing: Experience with streaming data and real-time analytics. Performance …
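The medallion architecture named above is a three-layer refinement pattern: bronze keeps raw records as ingested, silver cleans and types them, gold aggregates for reporting. In Fabric this would run as PySpark notebooks over lakehouse tables; the flow in miniature, with invented sales records, in plain Python:

```python
# Bronze: raw records exactly as ingested, messy values included.
bronze = [{"city": " London ", "sales": "100"},
          {"city": "london",   "sales": "50"},
          {"city": "Leeds",    "sales": "70"},
          {"city": None,       "sales": "10"}]   # bad row, dropped in silver

# Silver: drop invalid rows, normalise text, cast types.
silver = [{"city": r["city"].strip().title(), "sales": int(r["sales"])}
          for r in bronze if r["city"]]

# Gold: aggregate for reporting (total sales per city).
gold = {}
for r in silver:
    gold[r["city"]] = gold.get(r["city"], 0) + r["sales"]

print(gold)   # {'London': 150, 'Leeds': 70}
```

Each layer is persisted as its own table, so downstream consumers (e.g. Power BI) read gold without re-running the cleansing logic.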