London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
reviews and continuous improvement initiatives Essential Skills for the AWS Data Engineer: Extensive hands-on experience with AWS data services Strong programming skills in Python (including libraries such as PySpark or Pandas) Solid understanding of data modelling, warehousing and architecture design within cloud environments Experience building and managing ETL/ELT workflows and data pipelines at scale Proficiency with …
Central London, London, United Kingdom Hybrid/Remote Options
McCabe & Barton
Storage. Implement governance and security measures across the platform. Leverage Terraform or similar IaC tools for controlled and reproducible deployments. Databricks Development Develop and optimise data jobs using PySpark or Scala within Databricks. Implement the medallion architecture (bronze, silver, gold layers) and use Delta Lake for reliable data transactions. Manage cluster configurations and CI/CD pipelines for …
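The medallion architecture the listing names can be sketched without any Databricks dependency. Below is a minimal, illustrative bronze → silver → gold flow in plain Python; in a real Databricks job these layers would be PySpark DataFrames backed by Delta Lake tables, and the field names here are purely hypothetical:

```python
# Minimal sketch of the medallion pattern (bronze -> silver -> gold).
# In Databricks this would use PySpark DataFrames and Delta Lake tables;
# plain lists of dicts stand in here so the layering logic is visible.

def to_silver(bronze_rows):
    """Clean and validate raw bronze records."""
    silver = []
    for row in bronze_rows:
        if row.get("amount") is None:  # drop incomplete records
            continue
        silver.append({
            "account": row["account"].strip().upper(),  # normalise keys
            "amount": float(row["amount"]),
        })
    return silver

def to_gold(silver_rows):
    """Aggregate silver records into a business-level gold view."""
    totals = {}
    for row in silver_rows:
        totals[row["account"]] = totals.get(row["account"], 0.0) + row["amount"]
    return totals

bronze = [
    {"account": " acc1 ", "amount": "10.5"},
    {"account": "ACC1", "amount": "4.5"},
    {"account": "acc2", "amount": None},  # rejected at the silver layer
]
print(to_gold(to_silver(bronze)))  # {'ACC1': 15.0}
```

The point of the pattern is that each layer is reproducible from the one below it: bronze keeps raw ingest untouched, silver applies cleaning rules, gold serves aggregated business views.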
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
from APIs, databases, and financial data sources into Azure Databricks. Optimize pipelines for performance, reliability, and cost, incorporating data quality checks. Develop complex transformations and processing logic using Spark (PySpark/Scala) for cleaning, enrichment, and aggregation, ensuring accuracy and consistency across the data lifecycle. Work extensively with Unity Catalog, Delta Lake, Spark SQL, and related services. Apply best …
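The "data quality checks" mentioned here typically mean rule-based validation applied at ingest. A minimal stdlib sketch of that idea is below; the rule names and fields are illustrative only, and a production Databricks pipeline would more likely express these as Spark expectations or Delta Live Tables constraints:

```python
# Minimal sketch of rule-based data-quality checks of the kind an
# ingest pipeline might apply. Rule names and fields are illustrative.

RULES = {
    "price_not_null": lambda r: r.get("price") is not None,
    "price_non_negative": lambda r: r.get("price") is not None and r["price"] >= 0,
}

def run_checks(rows):
    """Split rows into passes and failures, recording which rules failed."""
    passed, failed = [], []
    for row in rows:
        broken = [name for name, rule in RULES.items() if not rule(row)]
        (failed if broken else passed).append((row, broken))
    return passed, failed

passed, failed = run_checks([
    {"price": 9.99}, {"price": None}, {"price": -1.0},
])
print(len(passed), len(failed))  # 1 2
```

Keeping the failure reason alongside the rejected row is what makes quarantine-and-review workflows possible downstream.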
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
build, and maintain scalable ETL pipelines to ingest, transform, and load data from diverse sources (APIs, databases, files) into Azure Databricks. Implement data cleaning, validation, and enrichment using Spark (PySpark/Scala) and related tools to ensure quality and consistency. Utilize Unity Catalog, Delta Lake, Spark SQL, and best practices for Databricks development, optimization, and deployment. Program in SQL …
City of London, London, United Kingdom Hybrid/Remote Options
Syntax Consultancy Limited
modelling techniques + data integration patterns. Experience of working with complex data pipelines, large data sets, data pipeline optimization + data architecture design. Implementing complex data transformations using Spark, PySpark or Scala + working with SQL/MySQL databases. Experience with data quality, data governance processes, Git version control + Agile development environments. Azure Data Engineer certification preferred - e.g. …
London, South East, England, United Kingdom Hybrid/Remote Options
UBDS Group
and monitoring. Data modelling expertise to develop low-level designs and implement models against business requirements, using design patterns such as Inmon, Kimball and Data Vault. Excellent Databricks, Python, PySpark and Spark SQL knowledge, including writing, testing and quality assuring code and knowledge of Unity Catalog best practice to govern data assets as well as Azure and/or …
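Of the modelling patterns named (Inmon, Kimball, Data Vault), the Kimball approach centres on star schemas: fact tables joined to dimension tables. The sketch below builds a tiny, hypothetical star schema in SQLite to make the shape concrete; table and column names are illustrative, not taken from the listing:

```python
import sqlite3

# Hypothetical Kimball-style star schema: one fact table keyed to a
# date dimension. Table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, calendar_date TEXT);
    CREATE TABLE fact_sales (
        date_key INTEGER REFERENCES dim_date(date_key),
        amount   REAL
    );
""")
conn.executemany("INSERT INTO dim_date VALUES (?, ?)",
                 [(20240101, "2024-01-01"), (20240102, "2024-01-02")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                 [(20240101, 100.0), (20240101, 50.0), (20240102, 25.0)])

# Typical dimensional query: facts aggregated by a dimension attribute.
rows = conn.execute("""
    SELECT d.calendar_date, SUM(f.amount)
    FROM fact_sales f JOIN dim_date d USING (date_key)
    GROUP BY d.calendar_date ORDER BY d.calendar_date
""").fetchall()
print(rows)  # [('2024-01-01', 150.0), ('2024-01-02', 25.0)]
```

Data Vault differs in that it splits entities into hubs, links and satellites for auditability; the star schema above is the analytics-facing shape most BI tools expect.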
London, South East, England, United Kingdom Hybrid/Remote Options
Oscar Technology
warehousing techniques, including the Kimball Methodology or other similar dimensional modelling standards, is essential to the role. Technical experience building and deploying models and reports utilizing the following tools: PySpark Microsoft Fabric or Databricks Power BI Git CI/CD pipelines (Azure DevOps experience preferred) An understanding of the structure and purpose of the Financial Advice and Wealth Management …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
have: Hands-on experience creating data pipelines using Azure services such as Synapse, Data Factory or Databricks Commercial experience with Microsoft Fabric Strong understanding of SQL and Python/PySpark Experience with Power BI and data modelling Some of the package/role details include: Salary up to £85,000 Flexible hybrid working model (normally once/twice per …
met. The role involves structuring analytical solutions that address business objectives and problem solving. We are looking for hands-on experience in writing code for AWS Glue in Python, PySpark, and Spark SQL. The successful candidate will translate stated or implied client needs into researchable hypotheses, facilitate client working sessions, and be involved in recurring project status meetings. You …
London, South East, England, United Kingdom Hybrid/Remote Options
Step 2 Recruitment LTD
PowerPoint presentations/reports and presenting to clients or colleagues Industry experience in the retail banking or wider financial services sector Additional technical experience in any of the following – PySpark, Microsoft Azure, VBA, HTML/CSS, JavaScript, JQuery, SQL, PHP, Power Automate, Power BI What we offer A highly competitive salary A genuinely compelling profit share scheme, with the …
from a "fail fast" approach to a more stable and controlled iteration management process. To be considered for the post you'll need all the essential criteria Essential SQL PySpark/Python Power BI/Fabric Semantic Models Ability to work with/alongside stakeholders with their own operational pressures Able to follow best practices and adapt, even without …
WC2H 0AA, Leicester Square, Greater London, United Kingdom Hybrid/Remote Options
Youngs Employment Services
from a "fail fast" approach to a more stable and controlled iteration management process. To be considered for the post you'll need all the essential criteria Essential SQL PySpark/Python >6 months of practical Fabric experience in an Enterprise setting Power BI/Fabric Semantic Models Ability to work with/alongside stakeholders with their own operational …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
requirements.* Ensure best practices in data governance, security, and compliance. Key Skills & Experience* Proven experience as an Azure Data Engineer.* Strong hands-on expertise with Databricks - 5+ years experience (PySpark, notebooks, clusters, Delta Lake).* Solid knowledge of Azure services (Data Lake, Synapse, Data Factory, Event Hub).* Experience working with DevOps teams and CI/CD pipelines.* Ability …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
and lakehouse architectures on Azure, enabling advanced analytics and data-driven decision making across the business. Key Responsibilities Design, develop, and maintain ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake. Build and optimise data lakehouse architectures on Azure Data Lake Storage (ADLS). Develop high-performance data solutions using Azure Synapse, Azure Data Factory, and Databricks workflows … using tools like Terraform, GitHub Actions, or Azure DevOps Required Skills & Experience 3+ years' experience as a Data Engineer working in Azure environments. Strong hands-on experience with Databricks (PySpark, Delta Lake, cluster optimisation, job scheduling). Solid knowledge of Azure cloud services including: Azure Data Lake Storage Azure Data Factory Azure Synapse/SQL Pools Azure Key Vault …
City of London, London, United Kingdom Hybrid/Remote Options
83zero Limited
controls. AI & Technology Enablement Build tools and processes for metadata management, data quality, and data sharing. Leverage AI and automation tools to improve data governance capabilities. Use Python, SQL, PySpark, Power BI, and related tools for data processing and visualisation. Strategy & Stakeholder Engagement Provide subject matter expertise in data governance and AI governance. Collaborate with business, data, and tech …
Employment Type: Permanent
Salary: £80000 - £90000/annum Bonus, Pension, PH, LA
City of London, London, United Kingdom Hybrid/Remote Options
Harvey Nash
money laundering, and financial crime across global platforms. The role includes direct line management of 5 engineers. We are looking for: Strong full-stack development skills in Python and PySpark, TypeScript (ideally also Node.js/React.js) and AWS (or another cloud provider) as a Technical Lead or Senior Engineer. Line management experience. The client is looking to offer up …
City of London, London, United Kingdom Hybrid/Remote Options
Client Server
Python Software Engineer/Developer (Python PySpark Azure) London/WFH to £100k Are you a data-centric Software Engineer with strong Python coding skills? You could be progressing your career in a senior, hands-on role at a scaling, global technical services company as they look to expand their product offerings with a new SaaS data analytics platform. … per week in the London office. About you: You have strong Python backend software engineering skills You have experience working with large data sets You have experience of using PySpark and ideally also Apache Spark You believe in automating wherever possible You're a collaborative problem solver with great communication skills Other technology in the stack includes: FastAPI, Django …
office) 25 days annual leave plus bank holidays Performance-related bonus Private medical care And many more Role and Responsibilities Develop and maintain AWS-based data pipelines using Python, PySpark, Spark SQL, AWS Glue, Step Functions, Lambda, EMR, and Redshift. Design, implement, and optimise data architecture for scalability, performance, and security. Work closely with business and technical stakeholders to … Contribute to planning, progress reporting, and delivery of project milestones. Engage in client workshops, gather feedback, and provide technical guidance. Required Skills & Experience Strong hands-on experience in Python, PySpark, and Spark SQL. Proven expertise in AWS Glue, Step Functions, Lambda, EMR, and Redshift. Solid understanding of cloud architecture, security, and scalability best practices. Experience designing and implementing CI …
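One pattern that recurs in the AWS Glue pipelines described here is incremental loading: only processing rows newer than a stored watermark, which Glue's job bookmarks automate. A minimal, illustrative sketch of that pattern follows; the dict-based state store is a stand-in for Glue's bookmark state or a control table, and all field names are hypothetical:

```python
# Sketch of the incremental-load (watermark) pattern that AWS Glue
# job bookmarks automate: only rows newer than the last processed
# watermark are loaded, then the watermark is advanced. The state
# store here is a plain dict purely for illustration.

state = {"watermark": "2024-01-01T00:00:00"}

def incremental_load(rows, state):
    """Return rows newer than the watermark and advance it."""
    new_rows = [r for r in rows if r["updated_at"] > state["watermark"]]
    if new_rows:
        state["watermark"] = max(r["updated_at"] for r in new_rows)
    return new_rows

batch = [
    {"id": 1, "updated_at": "2023-12-31T23:00:00"},  # already loaded
    {"id": 2, "updated_at": "2024-01-02T08:30:00"},
    {"id": 3, "updated_at": "2024-01-03T09:00:00"},
]
loaded = incremental_load(batch, state)
print([r["id"] for r in loaded], state["watermark"])
```

ISO-8601 timestamps compare correctly as strings, which is why the plain `>` comparison works here; with mixed formats you would parse to datetimes first.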
and creating scalable AI workflows that bring real business impact. What you'll do: Deploy and customise a powerful data platform for global clients Build and optimise pipelines using PySpark, Python, and SQL Design scalable AI workflows with tools like Palantir Collaborate with client teams to deliver data-driven outcomes What we're looking for: 2-4 years' experience … in data engineering or analytics Hands-on with PySpark, Python, and SQL A proactive problem-solver who thrives in a fast-moving startup Excellent communication and stakeholder skills Why join: £50,000-£75,000 + share options Hybrid working (2-3 days/week in Soho) Highly social, collaborative culture with regular events Work alongside top industry leaders shaping …