to capture Cloud + Databricks needs. Define security, compliance, downtime tolerance, RPO/RTO, SLAs, and cost requirements. Capture data platform requirements across ingestion, transformation, governance, and analytics (Databricks, Delta Lake, Unity Catalog, Workflows). Map service and data dependencies, classify criticality, and align to the Core Cloud capability catalogue. Produce a clear, endorsed baseline of Core Cloud … BA within cloud or data platform programmes (Azure + Databricks ideal). Experience working with the AWS tech stack. Strong experience gathering technical, data, and platform requirements. Understanding of Databricks (Delta Lake, Unity Catalog, governance, clusters, pipelines). Comfortable engaging technical and non-technical stakeholders; strong documentation skills. Nice to have: data platform migration experience; exposure to FinOps; agile …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
architectures on Azure, enabling advanced analytics and data-driven decision making across the business. Key Responsibilities: Design, develop, and maintain ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake. Build and optimise data lakehouse architectures on Azure Data Lake Storage (ADLS). Develop high-performance data solutions using Azure Synapse, Azure Data Factory, and Databricks workflows. Implement … tools like Terraform, GitHub Actions, or Azure DevOps. Required Skills & Experience: 3+ years' experience as a Data Engineer working in Azure environments. Strong hands-on experience with Databricks (PySpark, Delta Lake, cluster optimisation, job scheduling). Solid knowledge of Azure cloud services including: Azure Data Lake Storage, Azure Data Factory, Azure Synapse/SQL Pools, Azure Key …
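The ETL/ELT responsibilities above follow a familiar extract → transform → load shape. Below is a minimal stdlib-Python sketch of that shape; in a real Databricks job each step would be a PySpark DataFrame operation reading from ADLS and writing Delta tables, and all record shapes and field names here are hypothetical.

```python
# Stdlib sketch of an extract -> transform -> load pipeline. In production
# these steps would use PySpark and Delta Lake; field names are invented.
from collections import defaultdict

def extract():
    # Stand-in for reading raw files from a landing zone (e.g. ADLS).
    return [
        {"order_id": "1", "region": "UK", "amount": "120.50"},
        {"order_id": "2", "region": "UK", "amount": None},      # bad row
        {"order_id": "3", "region": "DE", "amount": "80.00"},
    ]

def transform(rows):
    # Cleaning: drop rows with missing amounts, then cast types.
    cleaned = [
        {**r, "amount": float(r["amount"])}
        for r in rows
        if r.get("amount") is not None
    ]
    # Aggregation: total amount per region.
    totals = defaultdict(float)
    for r in cleaned:
        totals[r["region"]] += r["amount"]
    return dict(totals)

def load(totals):
    # Stand-in for writing a curated ("gold") Delta table or serving layer.
    return {"gold_region_totals": totals}

pipeline_output = load(transform(extract()))
```

Keeping each stage a pure function makes the pipeline easy to test and to translate into scheduled Databricks workflow tasks.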
London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
methodologies that enhance efficiency, scalability, and cost optimisation. Essential Skills for the Senior Data Engineer: Proficient with Databricks and Apache Spark, including performance tuning and advanced concepts such as Delta Lake and streaming. Strong programming skills in Python with experience in software engineering principles, version control, unit testing and CI/CD pipelines. Advanced knowledge of SQL and …
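The listing pairs Spark work with software engineering practice (unit testing, CI/CD). One common approach is to keep transformation logic in pure Python functions so it can be unit-tested without a cluster; the sketch below assumes that pattern, and the function and field names are hypothetical.

```python
# Sketch of unit-testing a pure transformation function with stdlib unittest.
# Keeping cleaning logic free of Spark objects lets CI run it without a
# cluster; normalise_customer and its fields are illustrative only.
import unittest

def normalise_customer(record: dict) -> dict:
    """Trim whitespace, lowercase the email, and coerce an optional age."""
    return {
        "name": record["name"].strip(),
        "email": record["email"].strip().lower(),
        "age": int(record["age"]) if record.get("age") not in (None, "") else None,
    }

class NormaliseCustomerTest(unittest.TestCase):
    def test_trims_and_lowercases(self):
        out = normalise_customer({"name": " Ada ", "email": "ADA@X.COM", "age": "36"})
        self.assertEqual(out, {"name": "Ada", "email": "ada@x.com", "age": 36})

    def test_missing_age_becomes_none(self):
        out = normalise_customer({"name": "Bob", "email": "b@x.com", "age": ""})
        self.assertIsNone(out["age"])

# Run the suite programmatically (a CI pipeline would invoke the test runner).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormaliseCustomerTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a CI/CD pipeline (GitHub Actions, Azure DevOps) this suite would run on every commit before the Databricks job is deployed.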
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
Develop complex transformations and processing logic using Spark (PySpark/Scala) for cleaning, enrichment, and aggregation, ensuring accuracy and consistency across the data lifecycle. Work extensively with Unity Catalog, Delta Lake, Spark SQL, and related services. Apply best practices for development, deployment, and workload optimization. Program in SQL, Python, R, YAML, and JavaScript. Integrate data from relational databases …
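The enrichment step described above typically means joining fact rows against a reference (dimension) table before aggregating. A hypothetical stdlib sketch of that join, mirroring what a Spark SQL left join between a Delta fact table and a dimension table would do (table and column names are illustrative only):

```python
# Stdlib sketch of a left-join enrichment step; in Spark SQL this would be
# `fact LEFT JOIN dim ON fact.country_code = dim.code`. Names are invented.
country_dim = {"UK": "United Kingdom", "DE": "Germany"}  # dimension lookup

fact_rows = [
    {"order_id": 1, "country_code": "UK", "amount": 100.0},
    {"order_id": 2, "country_code": "DE", "amount": 50.0},
    {"order_id": 3, "country_code": "FR", "amount": 75.0},  # no dimension match
]

def enrich(rows, dim):
    # Left-join semantics: keep unmatched rows, label them "Unknown" so that
    # downstream consistency checks can flag missing reference data.
    return [
        {**r, "country_name": dim.get(r["country_code"], "Unknown")}
        for r in rows
    ]

enriched = enrich(fact_rows, country_dim)
```

Labelling unmatched keys rather than silently dropping them is one way to preserve the "accuracy and consistency across the data lifecycle" the role calls for.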
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
sources (APIs, databases, files) into Azure Databricks. Implement data cleaning, validation, and enrichment using Spark (PySpark/Scala) and related tools to ensure quality and consistency. Utilize Unity Catalog, Delta Lake, Spark SQL, and best practices for Databricks development, optimization, and deployment. Program in SQL, Python, R, YAML, and JavaScript. Integrate data from multiple sources and formats (CSV … JSON, Parquet, Delta) for downstream analytics, dashboards, and reporting. Apply Azure Purview for governance and quality checks. Monitor pipelines, resolve issues, and enhance data quality processes. Work closely with engineers, data scientists, and stakeholders. Participate in code reviews and clearly communicate technical concepts. Develop CI/CD pipelines for deployments and automate data engineering workflows using DevOps principles. Interested …
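Integrating multiple formats, as the listing describes, usually means normalising each source into one common record shape before analytics. A hedged sketch for the two stdlib-readable formats mentioned (CSV and JSON); Parquet and Delta would need PySpark or dedicated libraries, and the file contents and field names here are invented:

```python
# Stdlib sketch: parse CSV and JSON sources into a shared record shape,
# then aggregate for a downstream dashboard. Sources are inlined strings
# standing in for files landed from APIs, databases, or file drops.
import csv
import io
import json

csv_source = "id,value\n1,10\n2,20\n"
json_source = '[{"id": 3, "value": 30}]'

def read_csv_records(text):
    return [
        {"id": int(row["id"]), "value": int(row["value"])}
        for row in csv.DictReader(io.StringIO(text))
    ]

def read_json_records(text):
    return [{"id": r["id"], "value": r["value"]} for r in json.loads(text)]

records = read_csv_records(csv_source) + read_json_records(json_source)
total_value = sum(r["value"] for r in records)
```

Converging on one schema at the ingestion boundary is what lets the later steps (quality checks, monitoring, reporting) treat all sources uniformly.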