London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
… reviews and continuous improvement initiatives. Essential Skills for the AWS Data Engineer: extensive hands-on experience with AWS data services; strong programming skills in Python (including libraries such as PySpark or Pandas); solid understanding of data modelling, warehousing and architecture design within cloud environments; experience building and managing ETL/ELT workflows and data pipelines at scale; proficiency with …
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
… from APIs, databases, and financial data sources into Azure Databricks. Optimize pipelines for performance, reliability, and cost, incorporating data quality checks. Develop complex transformations and processing logic using Spark (PySpark/Scala) for cleaning, enrichment, and aggregation, ensuring accuracy and consistency across the data lifecycle. Work extensively with Unity Catalog, Delta Lake, Spark SQL, and related services. Apply best …
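As an illustration of the Spark transformation work this listing describes (cleaning, enrichment, aggregation into Delta), here is a minimal PySpark sketch. The table and column names (raw_trades, trade_id, counterparty, curated.daily_trades) are assumptions for the example, not taken from the listing.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("trade-enrichment").getOrCreate()

# Hypothetical source table registered in the metastore / Unity Catalog.
raw = spark.read.table("raw_trades")

cleaned = (
    raw
    .dropDuplicates(["trade_id"])                       # de-duplicate on the key
    .filter(F.col("amount").isNotNull())                # basic data quality check
    .withColumn("trade_date", F.to_date("trade_date"))  # normalise types
)

# Aggregate to daily totals per counterparty and persist as a Delta table.
daily = (
    cleaned.groupBy("trade_date", "counterparty")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("trade_count"),
    )
)
daily.write.format("delta").mode("overwrite").saveAsTable("curated.daily_trades")
```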
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
… Analytics, Databricks, SQL Database, and Azure Storage. Excellent SQL and data modelling (star/snowflake, dimensional modelling). Knowledge of Power BI dataflows, DAX, and RLS. Experience with Python, PySpark, or T-SQL for transformations. Understanding of CI/CD and DevOps (Git, YAML pipelines). Strong grasp of data governance, security, and performance tuning. To apply for this …
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
… build, and maintain scalable ETL pipelines to ingest, transform, and load data from diverse sources (APIs, databases, files) into Azure Databricks. Implement data cleaning, validation, and enrichment using Spark (PySpark/Scala) and related tools to ensure quality and consistency. Utilize Unity Catalog, Delta Lake, Spark SQL, and best practices for Databricks development, optimization, and deployment. Program in SQL …
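The validation step this listing mentions is often implemented by splitting valid and invalid rows rather than silently dropping bad records. A hedged sketch follows; the file path, schema, and table names are illustrative assumptions only.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-validation").getOrCreate()

# Hypothetical raw landing file.
df = spark.read.option("header", True).csv("/mnt/raw/customers.csv")

# Quarantine invalid rows instead of discarding them.
valid_cond = F.col("customer_id").isNotNull() & F.col("email").contains("@")
valid = df.filter(valid_cond)
rejected = df.filter(~valid_cond)

# Rejects are kept for later inspection; only validated rows are loaded.
rejected.write.format("delta").mode("append").saveAsTable("quarantine.customers")
valid.write.format("delta").mode("append").saveAsTable("bronze.customers")
```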
… experience as an Azure Data Engineer in enterprise environments. Strong hands-on expertise with Azure Data Factory, Databricks, Synapse, and Azure Data Lake. Proficiency in SQL, Python, and PySpark. Experience with data modelling, ETL optimisation, and cloud migration projects. Familiarity with Agile delivery and CI/CD pipelines. Excellent communication skills for working with technical and …
… data governance, security, and access control within Databricks. Provide technical mentorship and guidance to junior engineers. Must-Have Skills: Strong hands-on experience with Databricks and Apache Spark (preferably PySpark). Proven track record of building and optimizing data pipelines in cloud environments. Experience with AWS services such as S3, Glue, Lambda, Step Functions, Athena, IAM, and VPC. Proficiency …
Bristol, Avon, South West, United Kingdom Hybrid/Remote Options
IO Associates
… ETL/ELT development and orchestration tools (DBT, Airflow, or similar). Hands-on experience with cloud platforms (AWS, Azure, or GCP). Strong knowledge of SQL, Python, and PySpark for data processing. Familiarity with CI/CD pipelines and DevOps practices for data solutions. This role is perfect for someone who thrives on solving complex data challenges and …
London, South East, England, United Kingdom Hybrid/Remote Options
Hays Specialist Recruitment Limited
… as a Data Engineer with Active Security Clearance (SC); strong Python skills with modular, test-driven design; experience with Behave for unit and BDD testing (mocking, patching); proficiency in PySpark and distributed data processing; solid understanding of Delta Lake (design and maintenance); hands-on with Docker for development and deployment; familiarity with Azure services: Functions, Key Vault, Blob Storage …
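For readers unfamiliar with the Behave mocking/patching style this listing asks about, here is a minimal step-file sketch. The module under test (`ingest`) and its `write_rows` dependency are hypothetical names invented for the example, and `run_job` is assumed to call `write_rows(rows)`.

```python
# features/steps/ingest_steps.py
#
# Paired feature file (features/ingest.feature):
#   Feature: Ingest pipeline
#     Scenario: Rows are loaded
#       Given a source file with 3 rows
#       When the ingest job runs
#       Then 3 rows are written
from unittest.mock import patch
from behave import given, when, then

@given("a source file with {n:d} rows")
def given_rows(context, n):
    context.rows = [{"id": i} for i in range(n)]

@when("the ingest job runs")
def run_ingest(context):
    # Patch the storage writer so the test never touches real infrastructure.
    with patch("ingest.write_rows") as mock_write:
        import ingest  # hypothetical module under test
        ingest.run_job(context.rows)
        context.written = mock_write.call_args.args[0]

@then("{n:d} rows are written")
def check_written(context, n):
    assert len(context.written) == n
```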
WC2H 0AA, Leicester Square, Greater London, United Kingdom Hybrid/Remote Options
Youngs Employment Services
from a "fail fast" approach to a more stable and controlled iteration management process. To be considered for the post you'll need all the essential criteria Essential SQL Pyspark/Python >6 months of practical Fabric experience in an Enterprise setting Power BI/Fabric Semantic Models Ability to work with/alongside stakeholders with their own operational More ❯
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
… requirements. Ensure best practices in data governance, security, and compliance. Key Skills & Experience: proven experience as an Azure Data Engineer; strong hands-on expertise with Databricks, 5+ years' experience (PySpark, notebooks, clusters, Delta Lake); solid knowledge of Azure services (Data Lake, Synapse, Data Factory, Event Hub); experience working with DevOps teams and CI/CD pipelines; ability …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
… and lakehouse architectures on Azure, enabling advanced analytics and data-driven decision making across the business. Key Responsibilities: Design, develop, and maintain ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake. Build and optimise data lakehouse architectures on Azure Data Lake Storage (ADLS). Develop high-performance data solutions using Azure Synapse, Azure Data Factory, and Databricks workflows … using tools like Terraform, GitHub Actions, or Azure DevOps. Required Skills & Experience: 3+ years' experience as a Data Engineer working in Azure environments; strong hands-on experience with Databricks (PySpark, Delta Lake, cluster optimisation, job scheduling); solid knowledge of Azure cloud services including: Azure Data Lake Storage, Azure Data Factory, Azure Synapse/SQL Pools, Azure Key Vault …
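A minimal sketch of the ADLS-backed Delta work this listing describes, loading raw data from one container and persisting it as a Delta table in another. The storage account, container names, and paths are placeholders; authentication is assumed to be configured cluster-side (e.g. a service principal with Key Vault-backed secrets).

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-load").getOrCreate()

# abfss:// is the ADLS Gen2 URI scheme; these locations are hypothetical.
source_path = "abfss://raw@examplestorage.dfs.core.windows.net/orders/"
target_path = "abfss://curated@examplestorage.dfs.core.windows.net/orders_delta/"

orders = spark.read.format("json").load(source_path)

(
    orders.write.format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")  # allow schema changes on full reloads
    .save(target_path)
)
```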
… company for the duration of the contract. You must have several years of experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, PySpark, etc. You will also have a number of years of hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines. ETL process expertise is essential. …
Job Title: Data Quality Engineer Work Location: Cardiff, UK (Twice a month) The Role: Data Quality Engineer Responsibilities: As part of a multi-discipline team challenged with building a cloud data platform, you will be responsible for ensuring the quality …
… ML systems, deploying LLMs, and operationalizing models in production. Key Responsibilities: Design, develop, and deploy ML, Deep Learning, and LLM solutions. Implement scalable ML and data pipelines in Databricks (PySpark, Delta Lake, MLflow). Build automated MLOps pipelines with model tracking, CI/CD, and registry. Deploy and operationalize LLMs, including fine-tuning, prompt optimization, and monitoring. Architect secure … Mentor engineers, enforce best practices, and lead design/architecture reviews. Required Skills & Experience: 5+ years in ML/AI solution development. Recent hands-on experience with Databricks, PySpark, Delta Lake, MLflow. Experience with LLMs (Hugging Face, LangChain, Azure OpenAI). Strong MLOps, CI/CD, and model monitoring experience. Proficiency in Python, PyTorch/TensorFlow, FastAPI …
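The model-tracking-and-registry workflow this listing names can be sketched in a few lines of MLflow. This is an illustrative example, not the employer's pipeline; the experiment path and registered model name are assumptions.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy training data so the example is self-contained.
X, y = make_classification(n_samples=200, random_state=42)

mlflow.set_experiment("/Shared/demo-experiment")  # hypothetical experiment path

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # registered_model_name pushes a new version to the model registry.
    mlflow.sklearn.log_model(
        model, "model", registered_model_name="demo_classifier"
    )
```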
… and cluster tuning to ensure cost-efficient, high-performing workloads. Implement data quality, lineage tracking, and access control policies aligned with Databricks Unity Catalog and governance best practices. Develop PySpark applications for ETL, data transformation, and analytics, following modular and reusable design principles. Create and manage Delta Lake tables with ACID compliance, schema evolution, and time travel for versioned … data management. Integrate Databricks solutions with Azure services such as Azure Data Lake Storage, Key Vault, and Azure Functions. What We're Looking For: Proven experience with Databricks, PySpark, and Delta Lake. Strong understanding of workflow orchestration, performance optimisation, and data governance. Hands-on experience with Azure cloud services. Ability to work in a fast-paced environment and deliver …
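Schema evolution and time travel, as named in this listing, are standard Delta Lake features. A minimal sketch under assumed paths and toy data:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-demo").getOrCreate()

path = "/mnt/demo/events"  # hypothetical table location

# Initial write (version 0).
df1 = spark.createDataFrame([(1, "click")], ["id", "event"])
df1.write.format("delta").mode("overwrite").save(path)

# Append a frame with an extra column; mergeSchema evolves the table schema.
df2 = spark.createDataFrame([(2, "view", "web")], ["id", "event", "channel"])
df2.write.format("delta").mode("append").option("mergeSchema", "true").save(path)

# Time travel: read the table as of an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
print(v0.count())  # 1 row - the state before the append
```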
London, South East, England, United Kingdom Hybrid/Remote Options
Hays Specialist Recruitment Limited
Data Engineer - Active SC, Databricks, PySpark | Up to £475 per day (Inside IR35) | Remote/London | 6 months. My client is an international consultancy recruiting for a Data Engineer with Active Security Clearance (SC) and strong Databricks and Azure experience to deliver and optimise data engineering solutions. Key requirements: proven experience as a Data Engineer with Active … Security Clearance (SC); strong experience with Databricks, PySpark and Delta Lake; expertise in Jobs & Workflows, cluster tuning, and performance optimisation; solid understanding of data governance (Unity Catalog, Lineage, Access Policies); hands-on with Azure services: Data Lake Storage (Gen2), Key Vault, Azure Functions; familiarity with CI/CD for Databricks deployments; strong troubleshooting in distributed data environments; excellent communication …
About the Role: We are looking for a Python Data Engineer with strong hands-on experience in Behave-based unit testing, PySpark development, Delta Lake optimisation, and Azure cloud services. This role focuses on designing and deploying scalable data processing solutions in a containerised environment, emphasising maintainable, configurable, and test-driven code delivery. Key Responsibilities: Develop and maintain data ingestion, transformation … Azure Functions for serverless transformation logic; Azure Key Vault for secure credential management; Azure Blob Storage for data lake operations. What We're Looking For: proven experience in Python, PySpark, and Delta Lake; SC Cleared; strong knowledge of Behave for test-driven development; experience with Docker and containerised deployments; familiarity with Azure cloud services and data engineering best practices. …