with multiple data analytics tools (e.g. Power BI) Deep understanding of data warehousing concepts, ETL/ELT pipelines and dimensional modelling Proficiency in advanced programming languages (Python/PySpark, SQL) Experience in data pipeline orchestration (e.g. Airflow, Data Factory) Familiarity with DevOps and CI/CD practices (Git, Azure DevOps, etc.) Ability to communicate technical concepts to both …
Central London, London, United Kingdom Hybrid/Remote Options
McCabe & Barton
Storage. Implement governance and security measures across the platform. Leverage Terraform or similar IaC tools for controlled and reproducible deployments. Databricks Development Develop and optimise data jobs using PySpark or Scala within Databricks. Implement the medallion architecture (bronze, silver, gold layers) and use Delta Lake for reliable data transactions. Manage cluster configurations and CI/CD pipelines for …
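For illustration, a minimal sketch of the bronze-to-silver medallion step this listing describes, on Databricks with PySpark and Delta Lake. All paths, table names, and columns are hypothetical placeholders, not any employer's actual pipeline:

```python
# Minimal bronze -> silver medallion sketch on Databricks.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Bronze: land raw events as-is, preserving source fidelity
raw = spark.read.json("/mnt/landing/events/")
raw.write.format("delta").mode("append").save("/mnt/bronze/events")

# Silver: deduplicate, conform types, and filter out unusable rows
bronze = spark.read.format("delta").load("/mnt/bronze/events")
silver = (
    bronze
    .dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/events")
```

The gold layer would follow the same pattern, aggregating silver tables into consumption-ready views.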
and prompt engineering. Mandatory Skills: Cloud Platforms: Deep experience with AWS (S3, Lambda, Glue, Redshift) and/or Azure (Data Lake, Synapse). Programming & Scripting: Proficiency in Python, SQL, PySpark, etc. ETL/ELT & Streaming: Expertise in technologies like Apache Airflow, Glue, Kafka, Informatica, EventBridge, etc. Industrial Data Integration: Familiarity with OT data schema originating from OSIsoft PI, SCADA …
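As a sketch of the kind of orchestration named above, here is a minimal Airflow DAG wiring an extract-transform-load sequence. The DAG id, schedule, and task bodies are illustrative assumptions (the `schedule` argument assumes Airflow 2.4+):

```python
# Minimal Airflow DAG sketch: extract -> transform -> load, run daily.
# Task bodies and the schedule are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # e.g. pull files from an API or object store

def transform():
    ...  # e.g. run a PySpark or SQL transformation

def load():
    ...  # e.g. write results to a warehouse table

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```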
East London, London, United Kingdom Hybrid/Remote Options
Client Server
per week in the London office. About you: You have strong Python backend software engineer skills You have experience working with large data sets You have experience of using PySpark and ideally also Apache Spark You believe in automating wherever possible You're a collaborative problem solver with great communication skills Other technology in the stack includes: FastAPI, Django …
Nottingham, Nottinghamshire, East Midlands, United Kingdom Hybrid/Remote Options
Client Server
per week in the Nottingham office. About you: You have strong Python backend software engineer skills You have experience working with large data sets You have experience of using PySpark and ideally also Apache Spark You believe in automating wherever possible You're a collaborative problem solver with great communication skills Other technology in the stack includes: FastAPI, Django …
Greater Manchester, North West, United Kingdom Hybrid/Remote Options
Searchability (UK) Ltd
Enhanced Maternity & Paternity Charity Volunteer Days Cycle to work scheme And More.. DATA ENGINEER - ESSTENTIAL SKILLS Proven experience building data pipelines using Databricks . Strong understanding of Apache Spark (PySpark or Scala) and Structured Streaming . Experience working with Kafka (MSK) and handling real-time data . Good knowledge of Delta Lake/Delta Live Tables and the Medallion More ❯
S3 Data Lake, and CloudWatch. Strong knowledge of data extraction, transformation, and loading (ETL) processes, leveraging tools such as Talend, Informatica, Matillion, Pentaho, MuleSoft, Boomi, or scripting languages (Python, PySpark, SQL). Solid understanding of data warehousing and data modelling techniques (Star Schema, Snowflake Schema). Familiarity with security frameworks (GDPR, HIPAA, ISO 27001, NIST, SOX, PII) and AWS …
of working Champion DevOps and CI/CD methodologies to ensure agile collaboration and robust data solutions Engineer and orchestrate data models and pipelines Lead development activities using Python, PySpark and other technologies Write high-quality code that contributes to a scalable and maintainable data platform To be successful in this role, you will need to have the following …
Reigate, Surrey, England, United Kingdom Hybrid/Remote Options
esure Group
and influence decisions. Strong understanding of data models and analytics; exposure to predictive modelling and machine learning is a plus. Proficient in SQL and Python, with bonus points for PySpark, SparkSQL, and Git. Skilled in data visualisation with tools such as Tableau or Power BI. Confident writing efficient code and troubleshooting sophisticated queries. Clear and adaptable communicator, able to …
Essential Skills Include: Proven leadership and mentoring experience in senior data engineering roles Expertise in Azure Data Factory, Azure Databricks, and lakehouse architecture Strong programming skills (Python, T-SQL, PySpark) and test-driven development Deep understanding of data security, compliance, and tools like Microsoft Purview Excellent communication and stakeholder management skills Experience with containerisation and orchestration (e.g., Kubernetes, Azure …
London, South East, England, United Kingdom Hybrid/Remote Options
Oscar Technology
warehousing techniques, including the Kimball Methodology or other similar dimensional modelling standards, is essential to the role. Technical experience building and deploying models and reports utilizing the following tools: PySpark Microsoft Fabric or Databricks Power BI Git CI/CD pipelines (Azure DevOps experience preferred) An understanding of the structure and purpose of the Financial Advice and Wealth Management …
Experience in a Data Engineer/Data Engineering role Large and complex datasets Azure, Azure Databricks Microsoft SQL Server Lakehouse, Delta Lake Data Warehousing ETL Database Design Python/PySpark Azure Blob Storage Azure Data Factory Desirable: Exposure to ML/Machine Learning/AI/Artificial Intelligence …
data models and transformation pipelines using Databricks, Azure, and Power BI to turn complex datasets into reliable, insight-ready assets. You'll apply strong skills in SQL, Python, and PySpark to build efficient ELT workflows and ensure data quality, performance, and governance. Collaboration will be key as you partner with analysts and business teams to align data models with …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
have: Hands-on experience creating data pipelines using Azure services such as Synapse, Data Factory or Databricks Commercial experience with Microsoft Fabric Strong understanding of SQL and Python/PySpark Experience with Power BI and data modelling Some of the package/role details include: Salary up to £85,000 Flexible hybrid working model (normally once/twice per …
London, South East, England, United Kingdom Hybrid/Remote Options
Vermillion Analytics
Collaborate with brilliant behavioural scientists and product teams who'll challenge them in the best ways The ideal candidate will: Know their way around AWS data tools (Glue/PySpark, Athena) and Microsoft Fabric Be able to write clean Python and SQL in their sleep Have battle scars from integrating CRMs (HubSpot, Salesforce) via APIs Actually care about data …
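For context on the Glue/PySpark tooling named above, a minimal sketch of an AWS Glue job reading a catalogued table and writing Parquet to S3 for Athena to query. The database, table, key column, and bucket are hypothetical placeholders:

```python
# Minimal AWS Glue PySpark job sketch; catalog and S3 names are hypothetical.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued CRM source table as a DynamicFrame
source = glue_context.create_dynamic_frame.from_catalog(
    database="crm_raw", table_name="hubspot_contacts"
)

# Drop rows missing the primary key, then write Parquet for Athena
cleaned = source.toDF().dropna(subset=["contact_id"])
cleaned.write.mode("overwrite").parquet("s3://example-bucket/clean/contacts/")

job.commit()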
explain commercial impact. Understanding of ML Ops vs DevOps and broader software engineering standards. Cloud experience (any platform). Previous mentoring experience. Nice to have: Snowflake or Databricks Spark, PySpark, Hadoop or similar big data tooling BI exposure (PowerBI, Tableau, etc.) Interview Process Video call - high-level overview and initial discussion In-person technical presentation - based on a provided …
involve: Supporting the BI reporting team by creating and maintaining data solutions for KPI reporting. Developing scalable, performance-optimised ELT/ETL pipelines using T-SQL, Python, ADO, C#, PySpark and Jupyter Notebooks. Working with the Gazetteer and GIS teams to maintain stable, consolidated database platforms for mapping and GIS systems. Contributing to the development and maintenance of a …
Nottingham, Nottinghamshire, England, United Kingdom
E.ON
the perfect match? Proven experience in a data analytics or credit risk role, ideally within utilities, financial services or other regulated industry Strong coding skills in SQL, Python and PySpark for data extraction, transformation, modelling and forecasting Solid understanding of forecasting techniques, scenario modelling, and regression-based analytics Strong commercial acumen, with the ability to translate complex analytical findings …
to transform raw data into trusted, actionable insights that power critical business decisions. Key Responsibilities Design and implement scalable data pipelines and ETL/ELT workflows in Databricks using PySpark, SQL, and Delta Lake. Architect and manage the Medallion (Bronze, Silver, Gold) data architecture for optimal data organization, transformation, and consumption. Develop and maintain data models, schemas, and data … data platforms, Lakehouse architecture, and data engineering frameworks. Required Qualifications 6+ years of experience in data engineering 3+ years of hands-on experience with Databricks, Delta Lake, and Spark (PySpark preferred). Proven track record implementing Medallion Architecture (Bronze, Silver, Gold layers) in production environments. Strong knowledge of data modeling, ETL/ELT design, and data lakehouse concepts. Proficiency …
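The "reliable data transactions" Delta Lake provides typically come from transactional MERGE upserts between medallion layers. A minimal sketch, with hypothetical table paths and join keys (Databricks ships the `delta` package; elsewhere it requires delta-spark configuration):

```python
# Minimal Delta Lake MERGE upsert sketch: silver change set -> gold table.
# Paths and the customer_id key are hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.format("delta").load("/mnt/silver/customer_changes")
target = DeltaTable.forPath(spark, "/mnt/gold/customers")

(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()      # update rows that already exist
    .whenNotMatchedInsertAll()   # insert genuinely new rows
    .execute()                   # single atomic, ACID transaction
)
```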
United Kingdom, Wolstanton, Staffordshire Hybrid/Remote Options
Uniting Ambition
talent in this space. The role The role is building AI applications based on LLMs and models such as GPT and BERT You'll make use of Python programming, PySpark, TensorFlow, HuggingFace, LangChain, RAG techniques, interfacing with diverse data sets. Cloud data platforms and a diverse set of tools for AI app deployment. The opportunity Work at the forefront …
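To illustrate the RAG technique this listing mentions, a minimal sketch of the retrieval step: embed document chunks, embed the query, and pick the closest chunk to feed into an LLM prompt. The model name and documents are illustrative, not this employer's stack:

```python
# Minimal RAG retrieval sketch; model and documents are illustrative only.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Refunds are processed within 5 working days.",
    "Our API rate limit is 100 requests per minute.",
    "Support is available Monday to Friday, 9am-5pm.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

query = "How long do refunds take?"
query_vec = model.encode([query], normalize_embeddings=True)[0]

# Cosine similarity reduces to a dot product on normalised vectors
scores = doc_vecs @ query_vec
best = docs[int(np.argmax(scores))]
print(best)  # the chunk to include as context in the LLM prompt
```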
Manchester, Lancashire, England, United Kingdom Hybrid/Remote Options
Lorien
a blend of the following: Strong knowledge of AWS data services (Glue, S3, Lambda, Redshift, etc.) Solid understanding of ETL processes and data pipeline management Proficiency in Python and PySpark Experience working with SQL-based platforms Previous involvement in migrating on-premise solutions to cloud is highly desirable Excellent collaboration skills and ability to mentor others The Benefits: Salary …
high-impact systems. Line management or mentoring experience, with a genuine commitment to team growth and wellbeing. Strong hands-on skills in: AWS (or equivalent cloud platforms) Python/PySpark for data engineering and automation TypeScript, Node.js, React.js for full-stack development Solid grasp of distributed systems design, secure coding, and data privacy principles. Familiarity with fraud detection models …
Stevenage, Hertfordshire, England, United Kingdom Hybrid/Remote Options
Akkodis
and NoSQL to AWS cloud. Strong knowledge of ETL processes is essential, including experience with tools such as Talend, Informatica, Matillion, Pentaho, MuleSoft, Boomi, or scripting languages like Python, PySpark, and SQL. A solid understanding of data warehousing and modelling techniques, including Star and Snowflake schemas, is required. Ideally you will also have comprehensive knowledge of AWS Glue. …