Bristol, Avon, England, United Kingdom Hybrid / WFH Options
Adecco
approaches
Experience with data ingestion and ETL pipelines
Curious, adaptable, and a natural problem solver
Bonus points for:
- Experience in financial services, insurance, or reinsurance
- Familiarity with Databricks, Git, PySpark, or SQL
- Exposure to cyber risk or large-scale modelling environments
Ready to apply for this exciting Data Scientist role? Send your CV to - I'd love to hear from you.
Bristol, Avon, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
term data strategy with a strong focus on data integrity and GDPR compliance.
To be successful in the role you will have:
- Hands-on coding experience with Python or PySpark
- Proven expertise in building data pipelines using Azure Data Factory or Fabric Pipelines
- Solid experience with Azure technologies like Lakehouse Architecture, Data Lake, Delta Lake, and Azure Synapse
- Strong …
Data Engineer | Bristol/Hybrid | £65,000 - £80,000 | AWS | Snowflake | Glue | Redshift | Athena | S3 | Lambda | PySpark | Python | SQL | Kafka | Amazon Web Services

Do you want to work on projects that actually help people? Or maybe you want to work on a modern AWS stack? I am currently supporting a brilliant company in Bristol who build software which genuinely … pipelines using AWS services: implementing data validation, quality checks, and lineage tracking across pipelines, automating data workflows, and integrating data from various sources.

Tech you will use and learn: Python, PySpark, AWS, Lambda, S3, DynamoDB, CI/CD, Kafka, and more. This is a hybrid role in Bristol, and you also get a bonus and generous holiday entitlement, to name a couple …

Would you be interested in finding out more? If so, apply to the role or send your CV to
Sponsorship isn't available.

AWS | Snowflake | Glue | Redshift | Athena | S3 | Lambda | PySpark | Python | SQL | Kafka | Amazon Web Services
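The data-validation and quality-check duties described above can be sketched in plain Python. This is an illustrative example only, not code from the employer: the field names, rules, and `validate_batch` helper are hypothetical stand-ins for the kind of checks a pipeline might run before loading records downstream.

```python
# Hypothetical data-quality step: check required fields and value ranges
# on a batch of records, splitting clean rows from rejected rows.
# Schema and rules are invented for illustration.

REQUIRED_FIELDS = {"id", "event_time", "amount"}

def validate_record(record: dict) -> list:
    """Return a list of quality-check failures for one record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append("missing fields: %s" % sorted(missing))
    amount = record.get("amount")
    if amount is not None and (not isinstance(amount, (int, float)) or amount < 0):
        errors.append("invalid amount: %r" % (amount,))
    return errors

def validate_batch(records: list) -> tuple:
    """Split a batch into (clean_rows, rejected_rows_with_reasons)."""
    clean, rejected = [], []
    for rec in records:
        errs = validate_record(rec)
        if errs:
            rejected.append((rec, errs))
        else:
            clean.append(rec)
    return clean, rejected
```

In a real AWS pipeline the same pattern would typically run inside a Lambda or Glue job, with rejected rows routed to a quarantine location rather than dropped.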
teams to deliver robust, trusted, and timely data solutions that power advanced analytics and business intelligence.

What You'll Do:
- Architect and build scalable data pipelines using Microsoft Fabric, PySpark, and T-SQL
- Lead the development of Star Schema Lakehouse tables to support BI and self-service analytics
- Collaborate with stakeholders to translate business needs into data models and …
- Mentor engineers and act as a technical leader within the team
- Ensure data integrity, compliance, and performance across the platform

What You'll Bring:
- Expertise in Microsoft Fabric, Azure, PySpark, Spark SQL, and modern data engineering practices
- Strong experience with Lakehouse architectures, data orchestration, and real-time analytics
- A pragmatic, MVP-driven mindset with a passion for scalable, maintainable solutions
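The star-schema work mentioned above can be sketched minimally in plain Python: split denormalized rows into a dimension table with surrogate keys and a fact table that references it. The table and column names here are invented for illustration; in Microsoft Fabric this would normally be done with PySpark over Lakehouse tables rather than Python lists.

```python
# Hypothetical star-schema build: derive a product dimension (with
# surrogate keys) and a fact table from flat, denormalized sales rows.
# All names are illustrative, not from the job description.

def build_star_schema(rows: list) -> tuple:
    """Return (dim_product, fact_sales) built from flat input rows."""
    dim_index = {}        # natural key -> surrogate key
    dim_product = []
    fact_sales = []
    for row in rows:
        key = row["product_name"]
        if key not in dim_index:
            dim_index[key] = len(dim_product) + 1
            dim_product.append({
                "product_sk": dim_index[key],
                "product_name": key,
                "category": row["category"],
            })
        fact_sales.append({
            "product_sk": dim_index[key],   # foreign key into dim_product
            "sale_date": row["sale_date"],
            "amount": row["amount"],
        })
    return dim_product, fact_sales
```

The design choice this illustrates is the one BI tools expect from a Lakehouse gold layer: narrow dimension tables joined to a tall fact table on surrogate keys.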