London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
… reviews and continuous improvement initiatives. Essential Skills for the AWS Data Engineer: extensive hands-on experience with AWS data services; strong programming skills in Python (including libraries such as PySpark or Pandas); solid understanding of data modelling, warehousing and architecture design within cloud environments; experience building and managing ETL/ELT workflows and data pipelines at scale; proficiency with …
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
… from APIs, databases, and financial data sources into Azure Databricks. Optimize pipelines for performance, reliability, and cost, incorporating data quality checks. Develop complex transformations and processing logic using Spark (PySpark/Scala) for cleaning, enrichment, and aggregation, ensuring accuracy and consistency across the data lifecycle. Work extensively with Unity Catalog, Delta Lake, Spark SQL, and related services. Apply best …
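For illustration, a minimal PySpark sketch of the transformation work this role describes — cleaning, enriching, and aggregating records before persisting them to a Delta table. All table, column, and quality-rule names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical source: raw transactions landed from an API feed.
raw = spark.read.table("bronze.transactions")

cleaned = (
    raw
    .dropDuplicates(["transaction_id"])                       # basic quality check
    .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
    .withColumn("trade_date", F.to_date("event_ts"))          # enrichment
)

# Aggregate to a daily summary and persist as a Delta table.
daily = cleaned.groupBy("trade_date", "account_id").agg(
    F.sum("amount").alias("total_amount"),
    F.count(F.lit(1)).alias("txn_count"),
)

daily.write.format("delta").mode("overwrite").saveAsTable("silver.daily_transactions")
```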
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
… Analytics, Databricks, SQL Database, and Azure Storage. Excellent SQL and data modelling (star/snowflake, dimensional modelling). Knowledge of Power BI dataflows, DAX, and RLS. Experience with Python, PySpark, or T-SQL for transformations. Understanding of CI/CD and DevOps (Git, YAML pipelines). Strong grasp of data governance, security, and performance tuning. To apply for this …
Greater Manchester, North West, United Kingdom Hybrid/Remote Options
Searchability (UK) Ltd
… Enhanced maternity & paternity, charity volunteer days, cycle to work scheme, and more. DATA ENGINEER - ESSENTIAL SKILLS: Proven experience building data pipelines using Databricks. Strong understanding of Apache Spark (PySpark or Scala) and Structured Streaming. Experience working with Kafka (MSK) and handling real-time data. Good knowledge of Delta Lake/Delta Live Tables and the Medallion …
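As a sketch of what the streaming side of such a role can involve, here is a minimal Structured Streaming job landing raw Kafka (MSK) events into the bronze layer of a Medallion architecture. Broker addresses, topic, checkpoint path, and table names are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical MSK brokers and topic; in practice these come from config.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

# Keep the raw payload untouched: bronze stores data as received.
bronze = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("ingested_at"),
)

query = (
    bronze.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders_bronze")
    .outputMode("append")
    .toTable("bronze.orders")
)
```

Silver and gold layers would then parse, validate, and aggregate from `bronze.orders` in separate jobs.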
… S3 Data Lake, and CloudWatch. Strong knowledge of data extraction, transformation, and loading (ETL) processes, leveraging tools such as Talend, Informatica, Matillion, Pentaho, MuleSoft, Boomi, or scripting languages (Python, PySpark, SQL). Solid understanding of data warehousing and data modelling techniques (Star Schema, Snowflake Schema). Familiarity with security and compliance standards (GDPR, HIPAA, ISO 27001, NIST, SOX, PII) and AWS …
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
… build, and maintain scalable ETL pipelines to ingest, transform, and load data from diverse sources (APIs, databases, files) into Azure Databricks. Implement data cleaning, validation, and enrichment using Spark (PySpark/Scala) and related tools to ensure quality and consistency. Utilize Unity Catalog, Delta Lake, Spark SQL, and best practices for Databricks development, optimization, and deployment. Program in SQL …
… experience as an Azure Data Engineer in enterprise environments. Strong hands-on expertise with Azure Data Factory, Databricks, Synapse, and Azure Data Lake. Proficiency in SQL, Python, and PySpark. Experience with data modelling, ETL optimisation, and cloud migration projects. Familiarity with Agile delivery and CI/CD pipelines. Excellent communication skills for working with technical and …
… of working. Champion DevOps and CI/CD methodologies to ensure agile collaboration and robust data solutions; engineer and orchestrate data models and pipelines; lead development activities using Python, PySpark and other technologies; write high-quality code that contributes to a scalable and maintainable data platform. To be successful in this role, you will need to have the following …
… data governance, security, and access control within Databricks. • Provide technical mentorship and guidance to junior engineers. Must-Have Skills: • Strong hands-on experience with Databricks and Apache Spark (preferably PySpark). • Proven track record of building and optimizing data pipelines in cloud environments. • Experience with AWS services such as S3, Glue, Lambda, Step Functions, Athena, IAM, and VPC. • Proficiency …
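As an illustration of working with the AWS services this ad lists, a short boto3 sketch that runs an Athena query and reads back the results; the database, query, and results bucket are invented for the example:

```python
import time

import boto3

athena = boto3.client("athena", region_name="eu-west-2")

# Hypothetical database, table, and results bucket.
resp = athena.start_query_execution(
    QueryString="SELECT account_id, SUM(amount) AS total FROM transactions GROUP BY account_id",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query finishes, then fetch the first page of results.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # first row is the column header
        print([col.get("VarCharValue") for col in row["Data"]])
```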
… hands-on approach to data science, analytics, and ML solutions. Continuously optimise data workflows for performance, reliability, and scalability. What you'll need: Proven hands-on experience with Databricks, Python, PySpark, and SQL. Machine learning experience in a cloud environment (AWS, Azure, or GCP). Strong understanding of ML libraries such as scikit-learn, TensorFlow, or MLflow. Solid background in …
Edinburgh, Roxburgh's Court, City of Edinburgh, United Kingdom
Bright Purple
… on approach to data science, analytics, and ML solutions. • Continuously optimise data workflows for performance, reliability, and scalability. What you’ll need: • Proven hands-on experience with Databricks, Python, PySpark, and SQL. • Machine learning experience in a cloud environment (AWS, Azure, or GCP). • Strong understanding of ML libraries such as scikit-learn, TensorFlow, or MLflow. • Solid background in …
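For context, a minimal sketch of the ML workflow these listings describe: a scikit-learn model trained and tracked with MLflow. The synthetic data and parameters are placeholders, not any employer's actual pipeline:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real workflow would read from Databricks tables.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)      # record hyperparameters
    mlflow.log_metric("accuracy", acc)         # record evaluation metrics
    mlflow.sklearn.log_model(model, "model")   # versioned artifact for later serving
```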
Reigate, Surrey, England, United Kingdom Hybrid/Remote Options
esure Group
… and influence decisions. Strong understanding of data models and analytics; exposure to predictive modelling and machine learning is a plus. Proficient in SQL and Python, with bonus points for PySpark, Spark SQL, and Git. Skilled in data visualisation with tools such as Tableau or Power BI. Confident writing efficient code and troubleshooting sophisticated queries. Clear and adaptable communicator, able to …
Essential Skills Include: proven leadership and mentoring experience in senior data engineering roles; expertise in Azure Data Factory, Azure Databricks, and lakehouse architecture; strong programming skills (Python, T-SQL, PySpark) and test-driven development; deep understanding of data security, compliance, and tools like Microsoft Purview; excellent communication and stakeholder management skills; experience with containerisation and orchestration (e.g., Kubernetes, Azure …
London, South East, England, United Kingdom Hybrid/Remote Options
Oscar Technology
… warehousing techniques, including the Kimball Methodology or other similar dimensional modelling standards, is essential to the role. Technical experience building and deploying models and reports utilizing the following tools: PySpark; Microsoft Fabric or Databricks; Power BI; Git; CI/CD pipelines (Azure DevOps experience preferred). An understanding of the structure and purpose of the Financial Advice and Wealth Management …
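A brief PySpark sketch of a Kimball-style star schema load: a dimension with a surrogate key and a fact table keyed against it. All table and column names are invented for illustration:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical cleansed source of advice events.
src = spark.read.table("silver.advice_events")

# Dimension: one row per client, with a surrogate key (Kimball style).
# monotonically_increasing_id() is fine for a demo; production loads
# usually manage surrogate keys more carefully (e.g. identity columns).
dim_client = (
    src.select("client_id", "client_name", "segment").dropDuplicates(["client_id"])
    .withColumn("client_key", F.monotonically_increasing_id())
)
dim_client.write.format("delta").mode("overwrite").saveAsTable("gold.dim_client")

# Fact table: measures keyed by the dimension's surrogate key plus a date key.
fact = (
    src.join(dim_client.select("client_id", "client_key"), "client_id")
    .withColumn("date_key", F.date_format("event_ts", "yyyyMMdd").cast("int"))
    .select("client_key", "date_key", "fee_amount", "portfolio_value")
)
fact.write.format("delta").mode("append").saveAsTable("gold.fact_advice")
```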
Experience in a Data Engineer/Data Engineering role; large and complex datasets; Azure, Azure Databricks; Microsoft SQL Server; Lakehouse, Delta Lake; data warehousing; ETL; database design; Python/PySpark; Azure Blob Storage; Azure Data Factory. Desirable: exposure to ML/Machine Learning/AI/Artificial Intelligence …
… data models and transformation pipelines using Databricks, Azure, and Power BI to turn complex datasets into reliable, insight-ready assets. You'll apply strong skills in SQL, Python, and PySpark to build efficient ELT workflows and ensure data quality, performance, and governance. Collaboration will be key as you partner with analysts and business teams to align data models with …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
… have: hands-on experience creating data pipelines using Azure services such as Synapse, Data Factory or Databricks; commercial experience with Microsoft Fabric; strong understanding of SQL and Python/PySpark; experience with Power BI and data modelling. Some of the package/role details include: salary up to £85,000; flexible hybrid working model (normally once/twice per …
London, South East, England, United Kingdom Hybrid/Remote Options
Vermillion Analytics
… Collaborate with brilliant behavioural scientists and product teams who'll challenge them in the best ways. The ideal candidate will: know their way around AWS data tools (Glue/PySpark, Athena) and Microsoft Fabric; be able to write clean Python and SQL in their sleep; have battle scars from integrating CRMs (HubSpot, Salesforce) via APIs; actually care about data …
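For illustration, a sketch of CRM ingestion over a REST API, here using HubSpot's CRM v3 contacts endpoint with cursor pagination. The endpoint shape and field names should be verified against HubSpot's current API reference, and the token is a placeholder:

```python
import requests

# Placeholder token; HubSpot private-app tokens are passed as a Bearer header.
TOKEN = "your-private-app-token"
URL = "https://api.hubapi.com/crm/v3/objects/contacts"

def fetch_contacts():
    """Yield contact records, following the 'after' pagination cursor."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    params = {"limit": 100, "properties": "email,firstname,lastname"}
    while True:
        resp = requests.get(URL, headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        yield from body.get("results", [])
        nxt = body.get("paging", {}).get("next")
        if not nxt:
            break
        params["after"] = nxt["after"]  # cursor-based pagination

for contact in fetch_contacts():
    print(contact["id"], contact.get("properties", {}).get("email"))
```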
… explain commercial impact. Understanding of ML Ops vs DevOps and broader software engineering standards. Cloud experience (any platform). Previous mentoring experience. Nice to have: Snowflake or Databricks; Spark, PySpark, Hadoop or similar big data tooling; BI exposure (Power BI, Tableau, etc.). Interview process: video call - high-level overview and initial discussion; in-person technical presentation - based on a provided …
… data engineering, Python development, and cloud-native architecture. YOUR PROFILE: • Design, develop, and maintain robust data pipelines and ETL workflows using AWS services. • Implement scalable data processing solutions using PySpark and AWS Glue. • Build and manage infrastructure as code using CloudFormation. • Develop and deploy serverless applications using AWS Lambda, Step Functions, and S3. • Perform data querying and analysis using …
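A minimal sketch of the serverless pattern this ad mentions: an AWS Lambda handler responding to an S3 put event. The CSV assumption and return shape are illustrative only:

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by an S3 put event; reads the new object and returns a row count."""
    # Bucket and key come from the standard S3 event payload.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    # Assume a small CSV for the sketch; large files would be streamed instead.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    rows = body.splitlines()

    return {"statusCode": 200, "body": json.dumps({"key": key, "rows": len(rows)})}
```

In a Step Functions workflow, a handler like this would typically be one task state, with the returned payload passed to the next state.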
… involve: Supporting the BI reporting team by creating and maintaining data solutions for KPI reporting. Developing scalable, performance-optimised ELT/ETL pipelines using T-SQL, Python, ADO, C#, PySpark and Jupyter Notebooks. Working with the Gazetteer and GIS teams to maintain stable, consolidated database platforms for mapping and GIS systems. Contributing to the development and maintenance of a …
… based modelling workflows and PR reviews via Tabular Editor. Excellent design intuition: clean layouts, drill paths, and KPI logic. Nice to have: Python for automation or ad-hoc prep; PySpark familiarity. Understanding of Lakehouse patterns, Delta Lake, metadata-driven pipelines. Unity Catalog/Purview experience for lineage and governance. RLS/OLS implementation experience.
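As a sketch of what a metadata-driven pipeline can look like, a small PySpark loop driven by a hypothetical control table that lists each source's path, format, and target:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical control table: one row per source with its load settings.
meta = spark.read.table("config.pipeline_sources").collect()

for row in meta:
    # Each row supplies the source path, file format, load mode, and target table,
    # so adding a new feed is a config change rather than new code.
    df = spark.read.format(row["source_format"]).load(row["source_path"])
    (
        df.write.format("delta")
        .mode(row["load_mode"])          # e.g. "append" or "overwrite"
        .saveAsTable(row["target_table"])
    )
```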
Nottingham, Nottinghamshire, England, United Kingdom
E.ON
… the perfect match? Proven experience in a data analytics or credit risk role, ideally within utilities, financial services or another regulated industry; strong coding skills in SQL, Python and PySpark for data extraction, transformation, modelling and forecasting; solid understanding of forecasting techniques, scenario modelling, and regression-based analytics; strong commercial acumen, with the ability to translate complex analytical findings …
London, South East, England, United Kingdom Hybrid/Remote Options
Hays Specialist Recruitment Limited
… as a Data Engineer with active Security Clearance (SC); strong Python skills with modular, test-driven design; experience with Behave for unit and BDD testing (mocking, patching); proficiency in PySpark and distributed data processing; solid understanding of Delta Lake (design and maintenance); hands-on with Docker for development and deployment; familiarity with Azure services: Functions, Key Vault, Blob Storage …
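For illustration, a minimal Behave step file showing the BDD-with-mocking style this ad asks for. The feature wording, the `mypipeline` module, and its classes are all hypothetical stand-ins for the code under test:

```python
# features/steps/pipeline_steps.py — hypothetical steps for a feature like:
#   Given a raw file exists in blob storage
#   When the ingestion pipeline runs
#   Then 2 rows are loaded
from unittest.mock import patch

from behave import given, then, when

@given("a raw file exists in blob storage")
def step_raw_file(context):
    # Patch the storage client so the test never touches real Azure Blob Storage.
    context.patcher = patch("mypipeline.storage.BlobClient", autospec=True)
    mock_client = context.patcher.start()
    mock_client.return_value.download.return_value = b"id,amount\n1,10\n2,20\n"

@when("the ingestion pipeline runs")
def step_run_pipeline(context):
    from mypipeline.ingest import run  # hypothetical module under test
    context.result = run(source="raw/file.csv")

@then("{count:d} rows are loaded")
def step_rows_loaded(context, count):
    context.patcher.stop()
    assert context.result.row_count == count
```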