Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential skills: Programming languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2) Extensive Data Engineering and Data Analytics hands-on experience Significant AWS hands-on experience Technical Delivery Manager skills Geospatial Data experience (including QGIS … support your well-being and career growth. KEYWORDS Principal Geospatial Data Engineer, Geospatial, GIS, QGIS, FME, AWS, On-Prem Services, Software Engineering, Data Engineering, Data Analytics, Spark, Java, Python, PySpark, Scala, ETL Tools, AWS Glue. Please note, to be considered for this role you MUST reside in the UK and you MUST have the Right to Work in the UK.
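The advert's keyword list pairs AWS Glue with Spark and PySpark. As a rough illustration of the kind of Glue ETL work implied - not the employer's actual code - here is a minimal job skeleton; the catalogue database, table, column, and bucket names are all hypothetical.

```python
# Minimal AWS Glue ETL job skeleton (all names hypothetical).
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued source table (database/table are placeholders).
src = glue_context.create_dynamic_frame.from_catalog(
    database="geo_raw", table_name="land_parcels"
)

# Drop rows that are unusable downstream, then write to the curated zone.
clean = src.toDF().dropna(subset=["parcel_id", "geometry_wkt"])
clean.write.mode("overwrite").parquet("s3://example-curated/land_parcels/")

job.commit()
```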
Lead Data Engineer (Databricks) - Leeds (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, Github, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer) Our … modern data platform using cutting-edge technologies, architecting big data solutions and developing complex enterprise data ETL and ML pipelines and projections. The successful candidate will have strong Python, PySpark and SQL experience, a clear understanding of Databricks, and a passion for Data Science (R, Machine Learning and AI). Database experience with SQL and No … Benefits To apply for this position please send your CV to Nathan Warner at Noir Consulting. (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, Github, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer)
Cardiff, South Glamorgan, Wales, United Kingdom Hybrid / WFH Options
Octad Recruitment Consultants (Octad Ltd )
and clients Required Skills & Experience Must-Haves: 3+ years of hands-on Azure engineering experience (IaaS → PaaS), including Infrastructure as Code. Strong SQL skills and proficiency in Python or PySpark. Built or maintained data lakes/warehouses using Synapse, Fabric, Databricks, Snowflake, or Redshift. Experience hardening cloud environments (NSGs, identity, Defender). Demonstrated automation of backups, CI … their Azure data lake using Synapse, Fabric, or an alternative strategy. Ingest data from core platforms: NetSuite, HubSpot, and client RFP datasets. Automate data pipelines using ADF, Fabric Dataflows, PySpark, or SQL. Publish governed datasets with Power BI, enabling row-level security (RLS). By Year-End: Deliver a production-ready lakehouse powering BI and ready for AI.
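To make the pipeline bullet concrete, here is a hedged PySpark sketch of one ingestion hop into an Azure data lake. The storage account, container paths, and columns are invented for the example, and Delta Lake support on the cluster is assumed.

```python
# Sketch of one ingestion hop into an Azure data lake (paths/columns invented;
# Delta Lake support on the cluster is assumed).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("hubspot_ingest").getOrCreate()

# Land raw CRM extracts as-is, then standardise into the clean zone.
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/hubspot/contacts/")

clean = (
    raw.dropDuplicates(["contact_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

clean.write.format("delta").mode("append").save(
    "abfss://clean@examplelake.dfs.core.windows.net/hubspot/contacts/"
)
```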
Data Engineer (Databricks) - Leeds (Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, Github, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Data Engineer) Our client is a global innovator and world leader with one of the most recognisable names within technology. They are looking for … Data Engineers with significant Databricks experience to join an exceptional Agile engineering team. We are seeking a Data Engineer with strong Python, PySpark and SQL experience, a clear understanding of Databricks, and a passion for Data Science (R, Machine Learning and AI). Database experience with SQL and NoSQL - Aurora, MS SQL Server, MySQL is … top performers. Location: Leeds Salary: £40k - £50k + Pension + Benefits To apply for this position please send your CV to Nathan Warner at Noir Consulting. (Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, Github, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Data Engineer)
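Given the advert's emphasis on TDD alongside Python and PySpark, here is a minimal sketch of that working style - a pure transformation function plus a pytest test. The function and column names are invented for illustration.

```python
# A pure PySpark transformation plus a pytest test (names invented).
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def add_total_price(df: DataFrame) -> DataFrame:
    """Derive a line total from quantity and unit price."""
    return df.withColumn("total_price", F.col("quantity") * F.col("unit_price"))


def test_add_total_price():
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame([(2, 5.0)], ["quantity", "unit_price"])
    assert add_total_price(df).first()["total_price"] == 10.0
```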
Azure Data Engineer - 1/2 days onsite Summary: Join a team building a modern Azure-based data platform. This hands-on engineering role involves designing and developing scalable, automated data pipelines using tools like Data Factory, Databricks, and Synapse.
make your mark in global asset management and make an impact on the world's investors What you will be responsible for Design, develop and maintain applications leveraging Python, PySpark, and SQL, built on cloud-native architecture principles and technologies. Lead the design and development of scalable and robust software application architecture, ensuring alignment with business goals and industry best … tools and technologies to improve our technology stack. Develop into a Subject Matter Expert (SME) in technical and functional domain areas. What we value Demonstrated experience in Python, PySpark and SQL (AWS Redshift, Postgres, Oracle). Demonstrated experience building data pipelines with PySpark and AWS. Application development experience in financial services with hands-on experience designing, developing, and …
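As a hedged sketch of the kind of PySpark-on-AWS pipeline this role describes - read from S3, aggregate, load to a warehouse over JDBC - the bucket, table, and connection details below are placeholders, and a PostgreSQL JDBC driver on the classpath is assumed.

```python
# Read fund positions from S3, aggregate, and load to a warehouse over JDBC.
# Bucket, table, and connection details are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_positions").getOrCreate()

positions = spark.read.parquet("s3://example-bucket/positions/date=2024-01-01/")

by_fund = positions.groupBy("fund_id").agg(
    F.sum("market_value").alias("total_market_value")
)

(by_fund.write.format("jdbc")
    .option("url", "jdbc:postgresql://example-host:5432/analytics")
    .option("dbtable", "fund_positions_daily")
    .option("user", "etl_user")
    .option("password", "***")  # fetch from a secrets manager in practice
    .mode("overwrite")
    .save())
```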
Oracle and integrated with Databricks Spark on AWS. Write efficient, production-quality SQL and PL/SQL queries for data extraction and transformation in Oracle. Leverage Databricks Spark and PySpark to process large datasets and build machine learning models in a distributed environment. Collaborate closely with business stakeholders to understand data requirements and translate them into technical solutions. Ensure … data security). Working knowledge of AWS core services, including S3, EC2/EMR, IAM, Athena, Glue or Redshift. Hands-on experience with Databricks Spark on large datasets, using PySpark, Scala, or SQL. Familiarity with Delta Lake, Unity Catalog or similar data lakehouse technologies. Proficient in Linux environments, including experience with shell scripting, basic system operations, and navigating file systems.
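One plausible shape for the Oracle-to-Databricks hand-off this advert describes is a JDBC extract that pushes the selection down to Oracle before Spark takes over. The host, service name, schema, and credentials below are placeholders, and the Oracle JDBC driver is assumed to be installed on the cluster.

```python
# Pull an Oracle selection into Spark over JDBC, then persist as Delta.
# Host, service, schema, and credentials are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("oracle_extract").getOrCreate()

orders = (spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//example-host:1521/ORCLPDB1")
    # A subquery pushes the column selection down to Oracle.
    .option("dbtable", "(SELECT order_id, amount, order_date FROM sales.orders) t")
    .option("user", "etl_user")
    .option("password", "***")  # use a Databricks secret scope in practice
    .option("fetchsize", "10000")
    .load())

orders.write.format("delta").mode("overwrite").saveAsTable("bronze.sales_orders")
```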
real urgency, and real interest in doing this properly - not endless meetings and PowerPoints. What you'll be doing: Designing, building, and optimising Azure-based data pipelines using Databricks, PySpark, ADF, and Delta Lake Implementing a medallion architecture - from raw to curated Collaborating with analysts to make data business-ready Applying CI/CD and DevOps best practices (Git … time logistics datasets What they're looking for: A strong communicator - someone who can build relationships and help connect silos Experience building pipelines in Azure using Databricks, ADF, and PySpark Strong SQL and Python skills Bonus points if you've worked with Power BI, Azure Purview, or streaming tools. You're versatile - happy to support analysts and wear multiple hats.
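For readers unfamiliar with the medallion pattern mentioned above, here is a minimal bronze-to-silver hop in PySpark with Delta Lake; the mount paths and columns are assumptions chosen to echo the advert's logistics theme.

```python
# Bronze-to-silver hop with Delta Lake (paths/columns are assumptions).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion_demo").getOrCreate()

# Bronze: raw logistics events, kept exactly as ingested.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/shipment_events")

# Silver: deduplicated, typed, and filtered to valid records.
silver = (
    bronze.dropDuplicates(["event_id"])
          .withColumn("event_ts", F.to_timestamp("event_ts"))
          .filter(F.col("shipment_id").isNotNull())
)

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/shipment_events")
```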
Data Engineer | Bristol/Hybrid | £65,000 - £80,000 | AWS | Snowflake | Glue | Redshift | Athena | S3 | Lambda | PySpark | Python | SQL | Kafka | Amazon Web Services | Do you want to work on projects that actually help people? Or maybe you want to work on a modern AWS stack? I am currently supporting a brilliant company in Bristol who build software which genuinely … pipelines using AWS services, implementing data validation, quality checks, and lineage tracking across pipelines, automating data workflows, and integrating data from various sources. Tech you will use and learn - Python, PySpark, AWS, Lambda, S3, DynamoDB, CI/CD, Kafka and more. This is a hybrid role in Bristol and you also get a bonus and generous holiday entitlement, to name a couple … Would you be interested in finding out more? If so, apply to the role or send your CV to … Sponsorship isn't available. AWS | Snowflake | Glue | Redshift | Athena | S3 | Lambda | PySpark | Python | SQL | Kafka | Amazon Web Services
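The data validation and quality checks this advert mentions could look something like the following PySpark gate, which fails the run before bad data propagates; the bucket, column, and 1% threshold are all invented for the sketch.

```python
# A data-quality gate that fails the run before bad data propagates.
# The bucket, column, and 1% threshold are invented for the sketch.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("quality_gate").getOrCreate()

df = spark.read.parquet("s3://example-bucket/incoming/readings/")

total = df.count()
null_ids = df.filter(F.col("device_id").isNull()).count()

if total == 0 or null_ids / total > 0.01:
    raise ValueError(f"Quality gate failed: {null_ids}/{total} rows missing device_id")

df.write.mode("append").parquet("s3://example-bucket/validated/readings/")
```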
Data Engineer | Data Consultant | Azure | Fabric | Python | SQL | PySpark Senior Data Engineer - Up to £70,000 London - 3 days in-office Method Resourcing are thrilled to be partnering with a Microsoft Solutions Partner to support them in hiring a Data Consultant to focus on and specialise in their current and upcoming Fabric projects. This is a fantastic time to … is offering a salary of up to £70,000 dependent on experience + Bonus & Benefits. Please apply now for immediate consideration. Data Engineer | Data Consultant | Azure | Fabric | Python | SQL | PySpark Senior Data Engineer - Up to £70,000 London - 3 days in-office RSG Plc is acting as an Employment Agency in relation to this vacancy.
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Parking Network BV
For airports, for partners, for people. We are CAVU. At CAVU our purpose is to find new and better ways to make airport travel seamless and enjoyable for everybody. From the smallest ideas to the biggest changes. Every day here …
and managing project changes and interventions to achieve project outputs. Documenting all aspects of the project for future reference and audits. Technical Responsibilities: Developing SQL scripts (stored procedures) and PySpark notebooks. Creating and managing ingestion, ETL & ELT processes. Designing and configuring Synapse pipelines. Data modelling in various storage systems. Analysing existing data designs and suggesting improvements for performance, stability … Experience in Project Management within the Defence & Security sector. Strong technical skills in API, Java, Python, Web Development, SQL, and Azure. Proficiency in developing and managing SQL scripts and PySpark notebooks. Understanding of ETL & ELT processes and Synapse pipeline design and configuration. Experience in data modelling and improving existing data designs. Knowledge of real-time data processing. Capable of …
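A representative (and entirely illustrative) PySpark notebook cell of the sort a Synapse pipeline would invoke might look like this - `spark` is the session Synapse pre-creates in notebooks, and the storage paths and key column are placeholders.

```python
# Illustrative notebook cell: standardise a raw extract and write it back to
# the lake. `spark` is the session Synapse pre-creates; paths are placeholders.
from pyspark.sql import functions as F

raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/assets/")

curated = (
    raw.dropDuplicates(["asset_id"])
       .withColumn("load_date", F.current_date())
)

curated.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/assets/"
)
```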
practices. This is a fantastic opportunity for a curious, solutions-focused data scientist to help build out our capability, working with cutting-edge tools like Databricks, AWS data services, PySpark, and CI/CD pipelines. What's in it for you? You'll be joining a collaborative, supportive team with a real passion for data-led innovation. It's … business impact - we'd love to hear from you. About you: 2-5 years of experience in Data Science or a related field Strong programming skills in Python and PySpark Strong data science modelling skills across classification, regression, forecasting, and/or NLP Analytical mindset with the ability to present insights to both technical and non-technical audiences. Experience …
practices. This is a fantastic opportunity for a curious, solutions-focused data scientist to help build out our capability, working with cutting-edge tools like Databricks, AWS data services, PySpark, and CI/CD pipelines. What's in it for you? You'll be joining a collaborative, supportive team with a real passion for data-led innovation. It's … we can reach new heights. Together, we are CAVU. About You: 2-5 years of experience in Data Science or a related field Strong programming skills in Python and PySpark Strong data science modelling skills across classification, regression, forecasting, and/or NLP Analytical mindset with the ability to present insights to both technical and non-technical audiences. Experience …
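As a toy example in the spirit of the modelling skills both CAVU adverts list (classification, regression, forecasting), here is a minimal scikit-learn classifier on synthetic data - purely illustrative, not the team's actual stack.

```python
# Toy classifier on synthetic data; purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("Test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```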
teams to deliver robust, trusted, and timely data solutions that power advanced analytics and business intelligence. What You'll Do: Architect and build scalable data pipelines using Microsoft Fabric, PySpark, and T-SQL Lead the development of Star Schema Lakehouse tables to support BI and self-service analytics Collaborate with stakeholders to translate business needs into data models and … Mentor engineers and act as a technical leader within the team Ensure data integrity, compliance, and performance across the platform What You'll Bring: Expertise in Microsoft Fabric, Azure, PySpark, SparkSQL, and modern data engineering practices Strong experience with Lakehouse architectures, data orchestration, and real-time analytics A pragmatic, MVP-driven mindset with a passion for scalable, maintainable solutions.
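To ground the star-schema bullet, here is a hedged PySpark sketch of resolving a fact table against a dimension in a lakehouse; the silver/gold table names and surrogate key are assumptions, not the employer's model.

```python
# Resolve a fact table against a dimension for a star-schema lakehouse.
# The silver/gold table names and surrogate key are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star_schema_build").getOrCreate()

orders = spark.read.table("silver.orders")
dim_customer = spark.read.table("gold.dim_customer")

# Left join keeps late-arriving orders visible even without a dimension row.
fact_orders = (
    orders.join(dim_customer.select("customer_id", "customer_sk"),
                on="customer_id", how="left")
          .select("customer_sk", "order_id", "order_ts", "amount")
)

fact_orders.write.format("delta").mode("overwrite").saveAsTable("gold.fact_orders")
```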
Leeds, Yorkshire, United Kingdom Hybrid / WFH Options
PEXA Group Limited
the transformation pipeline from start to finish, guaranteeing that datasets are robust, tested, secure, and business-ready. Our data platform is built using Databricks, with data pipelines written in PySpark and orchestrated using Airflow. You will be expected to challenge and improve current transformations, ensuring they meet our performance, scalability, and data governance needs. This includes work with complex … days per year for meaningful collaboration in either Leeds or Thame. Key Responsibilities Ensure end-to-end data quality, from raw ingested data to business-ready datasets Optimise PySpark-based data transformation logic for performance and reliability Build scalable and maintainable pipelines in Databricks and Airflow Implement and uphold GDPR-compliant processes around PII data Collaborate with stakeholders to … management, metadata management, and wider data governance practices Help shape our approach to reliable data delivery for internal and external customers Skills & Experience Required Extensive hands-on experience with PySpark, including performance optimisation Deep working knowledge of Databricks (development, architecture, and operations) Proven experience working with Airflow for orchestration Proven track record in managing and securing PII data …
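The Databricks-plus-Airflow orchestration described above often reduces to a DAG like the following sketch, using the Databricks provider's run-now operator; the DAG name, connection id, and job id are placeholders.

```python
# Airflow DAG triggering a pre-defined Databricks PySpark job via the
# provider's run-now operator; the connection id and job id are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="daily_transformations",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_transform = DatabricksRunNowOperator(
        task_id="run_pyspark_transform",
        databricks_conn_id="databricks_default",
        job_id=12345,  # hypothetical Databricks job id
    )
```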
TECHNICAL PROGRAMME MANAGER - DATA INGESTION (PHARMA/SNOWFLAKE) UP TO £560 PER DAY HYBRID (1/2 DAYS PER WEEK IN SPAIN & GERMANY) 6 MONTHS THE COMPANY: A global data and analytics consultancy are delivering a large-scale data ingestion programme …