Good experience with ETL - SSIS, SSRS, T-SQL (on-prem/cloud); strong proficiency in SQL and Python for handling complex data problems; hands-on experience with Apache Spark (PySpark or Spark SQL); experience with the Azure data stack; knowledge of workflow orchestration tools like Apache Airflow; experience with containerisation technologies like Docker; proficiency in dimensional modelling techniques; experience More ❯
Bristol, Somerset, United Kingdom Hybrid / WFH Options
Adecco
experience blending data engineering and data science approaches; experience with data ingestion and ETL pipelines; curious, adaptable, and a natural problem solver. Bonus points for: experience in financial services, insurance, or reinsurance; familiarity with Databricks, Git, PySpark or SQL; exposure to cyber risk or large-scale modelling environments. Ready to apply for this exciting Data Scientist role? Send your CV to (see below) - I'd love More ❯
strategic leader with deep experience and a hands-on approach. You bring: a track record of scaling and leading data engineering initiatives; excellent coding skills (e.g. Python, Java, Spark, PySpark, Scala); strong AWS expertise and cloud-based data processing; advanced SQL/database skills; delivery management and mentoring abilities. Highly desirable: familiarity with tools like AWS Glue, Azure Data More ❯
Experience: strong SQL skills and experience with SSIS, SSRS, SSAS; data warehousing, ETL processes, and best-practice data management; Azure cloud technologies (Synapse, Databricks, Data Factory, Power BI); Python/PySpark; proven ability to work in hybrid data environments; ability to manage and lead onshore and offshore teams; exceptional stakeholder management and communication skills, with the ability to talk to More ❯
south west london, south east england, united kingdom
Mars
change, including optimisation of production processes. Great knowledge of Python and, in particular, the classic Python data science stack (NumPy, pandas, PyTorch, scikit-learn, etc.) is required; familiarity with PySpark is also desirable. Cloud platform experience (e.g. Azure, AWS, GCP); we're using Azure in the team. Good practical SQL understanding. Capacity and enthusiasm for coaching and mentoring More ❯
Bristol, Avon, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
term data strategy with a strong focus on data integrity and GDPR compliance. To be successful in the role you will have: hands-on coding experience with Python or PySpark; proven expertise in building data pipelines using Azure Data Factory or Fabric Pipelines; solid experience with Azure technologies like Lakehouse architecture, Data Lake, Delta Lake, and Azure Synapse; strong More ❯
I am currently on the lookout for a contract AWS Data Engineer for a scale-up that has a number of greenfield projects coming up. Tech stack: AWS, Databricks Lakehouse, PySpark, SQL, ClickHouse/MySQL/DynamoDB. If you are interested in this position, please click apply with an updated copy of your CV and I will call you to More ❯
Data Engineer | Bristol/Hybrid | £65,000 - £80,000 | AWS | Snowflake | Glue | Redshift | Athena | S3 | Lambda | PySpark | Python | SQL | Kafka | Amazon Web Services | Do you want to work on projects that actually help people? Or maybe you want to work on a modern AWS stack? I am currently supporting a brilliant company in Bristol who build software which genuinely … pipelines using AWS services; implementing data validation, quality checks, and lineage tracking across pipelines; automating data workflows; and integrating data from various sources. Tech you will use and learn: Python, PySpark, AWS, Lambda, S3, DynamoDB, CI/CD, Kafka, and more. This is a hybrid role in Bristol, and you also get a bonus and generous holiday entitlement, to name a couple … you be interested in finding out more? If so, apply to the role or send your CV to - sponsorship isn't available. AWS | Snowflake | Glue | Redshift | Athena | S3 | Lambda | PySpark | Python | SQL | Kafka | Amazon Web Services More ❯
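The "data validation, quality checks" work this listing describes often boils down to row-level rules applied before loading. A minimal sketch of the idea in plain Python follows; the field names (`id`, `amount`, `event_date`) and rules are invented for illustration, not taken from the role.

```python
from datetime import datetime

def validate_record(record):
    """Return a list of quality-check failures for one record.

    These are hypothetical example rules: a required id, a non-negative
    amount, and an ISO-formatted date.
    """
    errors = []
    if not record.get("id"):
        errors.append("missing id")
    if record.get("amount") is not None and record["amount"] < 0:
        errors.append("negative amount")
    try:
        datetime.strptime(record.get("event_date", ""), "%Y-%m-%d")
    except ValueError:
        errors.append("bad event_date")
    return errors

def split_valid_invalid(records):
    """Partition records into clean rows and quarantined rows with reasons."""
    valid, invalid = [], []
    for rec in records:
        errs = validate_record(rec)
        if errs:
            invalid.append((rec, errs))  # quarantine with failure reasons
        else:
            valid.append(rec)
    return valid, invalid
```

In a real pipeline the same pattern scales up: the quarantined rows would typically land in a dead-letter table for inspection rather than being dropped.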
teams to deliver robust, trusted, and timely data solutions that power advanced analytics and business intelligence. What You'll Do: Architect and build scalable data pipelines using Microsoft Fabric, PySpark, and T-SQL Lead the development of Star Schema Lakehouse tables to support BI and self-service analytics Collaborate with stakeholders to translate business needs into data models and … Mentor engineers and act as a technical leader within the team Ensure data integrity, compliance, and performance across the platform What You'll Bring: Expertise in Microsoft Fabric, Azure, PySpark, SparkSQL, and modern data engineering practices Strong experience with Lakehouse architectures, data orchestration, and real-time analytics A pragmatic, MVP-driven mindset with a passion for scalable, maintainable solutions More ❯
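The "Star Schema Lakehouse tables" mentioned above follow the standard dimensional-modelling split: a fact table holds keys and measures, dimensions hold descriptive attributes. A hypothetical plain-Python sketch of that join-and-aggregate pattern (table and column names are invented for illustration):

```python
# Dimension table keyed by surrogate key: descriptive attributes only.
dim_product = {
    1: {"product_name": "Widget", "category": "Hardware"},
    2: {"product_name": "Gizmo", "category": "Hardware"},
}

# Fact table: foreign keys plus additive measures.
fact_sales = [
    {"product_key": 1, "quantity": 3, "revenue": 30.0},
    {"product_key": 2, "quantity": 1, "revenue": 25.0},
    {"product_key": 1, "quantity": 2, "revenue": 20.0},
]

def sales_by_category(facts, dim):
    """Join facts to the product dimension and aggregate revenue by category."""
    totals = {}
    for row in facts:
        category = dim[row["product_key"]]["category"]
        totals[category] = totals.get(category, 0.0) + row["revenue"]
    return totals
```

In PySpark or SparkSQL the same shape becomes a join of the fact DataFrame to the dimension followed by a group-by, but the star-schema idea is identical.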