Brentford, Middlesex, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
Data Engineer - Microsoft Fabric - Azure - London - Hybrid - 70k
A well-established multinational organisation is undergoing a major data transformation, moving from legacy systems to a modern Microsoft ecosystem built on Azure and Fabric. We're looking for a Data Engineer with hands-on Microsoft Fabric experience … to lead this migration, design scalable architecture, and support analytics, forecasting, and AI enablement.
Key Responsibilities:
Spearhead the migration to Microsoft Fabric
Build robust data pipelines using OneLake, Dataflow and Data Factory
Deliver full-stack data engineering solutions
What You'll Bring:
Proven experience in BI or Data Engineering roles
Strong skills … in Microsoft Fabric and Azure Data Services
Solid knowledge of SQL and Power BI
Experience with data migration and modern data platforms
What's on Offer:
Competitive salary up to £70,000 (DOE)
Hybrid working - just 2-4 days per month onsite in London
A pivotal role in a high-impact transformation
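For context on the kind of Fabric pipeline work this role describes, here is a minimal, hedged sketch (not part of the listing) of one ingestion step written as a Fabric notebook, assuming a default lakehouse is attached; the file path, column names and table name are hypothetical. A Dataflow Gen2 or Data Factory pipeline would typically orchestrate steps like this on a schedule.

```python
# Minimal Fabric notebook sketch: land a raw CSV from the lakehouse Files area
# as a Delta table surfaced through OneLake. Paths and names are illustrative.
from pyspark.sql import functions as F

# `spark` is the session Fabric provides to every notebook.
# Read a raw file that has been dropped into the default lakehouse's Files area.
raw = spark.read.option("header", True).csv("Files/raw/orders.csv")

# Light typing and de-duplication before exposing the data to Power BI / analytics.
orders = (
    raw.withColumn("order_date", F.to_date("order_date"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .dropDuplicates(["order_id"])
)

# Write as a managed Delta table in the lakehouse (appears under Tables/).
orders.write.format("delta").mode("overwrite").saveAsTable("orders")
```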
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
Data Engineer - Azure Databricks/SC Clearance - Contract
Active SC Clearance is required for this position
Hybrid working - 3 days/week on site required
Up to £520/day - Inside IR35
We are currently recruiting for an experienced Data Engineer, required for a leading global transformation consultancy, based … well experienced in collaborating with cross-functional teams to deliver digital products.
Key skills and responsibilities:
Design, build, and maintain scalable ETL pipelines to ingest, transform, and load data from diverse sources (APIs, databases, files) into Azure Databricks.
Implement data cleaning, validation, and enrichment using Spark (PySpark/Scala) and related tools to ensure … quality and consistency.
Utilize Unity Catalog, Delta Lake, Spark SQL, and best practices for Databricks development, optimization, and deployment.
Program in SQL, Python, R, YAML, and JavaScript.
Integrate data from multiple sources and formats (CSV, JSON, Parquet, Delta) for downstream analytics, dashboards, and reporting.
Apply Azure Purview for governance and quality checks.
Monitor pipelines, resolve issues
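As a rough illustration of the ingest/clean/validate work this role describes (not taken from the listing), a minimal PySpark sketch for Databricks follows; the storage path, catalog, schema, table and column names are assumptions, and storage authentication is assumed to be configured.

```python
# Hypothetical Databricks sketch: ingest raw JSON from ADLS, apply basic
# cleaning/validation, and persist a Delta table under a Unity Catalog schema.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Databricks notebook

# Read raw events from a landing container (access to the account assumed configured).
raw = spark.read.json("abfss://landing@examplestorage.dfs.core.windows.net/events/")

# Basic validation and enrichment: drop keyless records, type the timestamp, de-duplicate.
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .dropDuplicates(["event_id"])
)

# Persist as Delta using a Unity Catalog three-level name: <catalog>.<schema>.<table>.
clean.write.format("delta").mode("append").saveAsTable("analytics.bronze.events")
```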
Key Responsibilities:
Synapse
Develop and maintain data models in Synapse to support reporting and analytics
Develop ETL solutions using Azure Synapse Analytics Pipelines and dedicated SQL pools for extraction, transformation and aggregation from Salesforce objects.
Analyze the source system and design the ETL data load.
Ability to create and configure an Azure Synapse Analytics workspace.
Create pipelines in Azure Synapse using Linked Services/Datasets/Pipelines to extract, transform and load data from different sources.
Perform regular monitoring, troubleshooting, and maintenance of the data architecture and pipelines.
Contribute to cloud infrastructure and security (Azure storage, networking, security, cost management)
Power BI Tools
Collaborate with data scientists, analysts, and stakeholders to gather requirements, design visualizations, and provide training to use self-service BI tools.
Salesforce and MuleSoft
Knowledge of Salesforce objects and data structures
Design and implement data integrations between Salesforce and Azure Synapse using APIs, ETL tools, and integration platforms like MuleSoft and Azure Data Factory/Azure Synapse Analytics.
Strong knowledge
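As an illustration of the Salesforce-to-Synapse flow outlined above (not part of the listing), the sketch below assumes Salesforce Opportunity records have already been landed in ADLS, for example by a Synapse pipeline Copy activity, and aggregates them in a Synapse Spark notebook; the storage account, containers, object and column names are hypothetical.

```python
# Minimal Synapse Spark notebook sketch: aggregate landed Salesforce data into a
# curated dataset for reporting. `spark` is the session Synapse provides.
from pyspark.sql import functions as F

# Salesforce Opportunity data landed by an upstream Copy activity (assumed path).
landed = spark.read.parquet(
    "abfss://landing@examplelake.dfs.core.windows.net/salesforce/Opportunity/"
)

# Aggregate won revenue per account and close month for Power BI reporting.
summary = (
    landed.filter(F.col("StageName") == "Closed Won")
          .groupBy("AccountId", F.date_trunc("month", "CloseDate").alias("close_month"))
          .agg(F.sum("Amount").alias("won_amount"))
)

# Persist to the curated zone; a dedicated SQL pool could then load this output
# via COPY INTO or expose it through an external table.
summary.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/sales/opportunity_summary/"
)
```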
Python Data Engineer - Azure & PySpark - SC Cleared Contract
£400-£458pd (Inside IR35)
SC Clearance is Essential
Summary
We're looking for a Python Data Engineer skilled in PySpark, Delta Lake, Azure services, containerized development, and Behave-based testing.
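As a rough sketch of the "Behave-based testing" skill mentioned above (not from the listing itself), the following shows a Gherkin scenario exercised by Behave step definitions against a toy PySpark transform; the feature wording, function and column names are all assumptions.

```python
# Hypothetical Behave step definitions (e.g. features/steps/transform_steps.py)
# for a small PySpark transform. The matching feature file might read:
#
#   Feature: Event enrichment
#     Scenario: Derive event_date from event_ts
#       Given a dataframe with an event_ts column
#       When the enrichment transform is applied
#       Then every row has a non-null event_date

from behave import given, when, then
from pyspark.sql import SparkSession, functions as F


def add_event_date(df):
    """Toy transform under test: derive a date column from a timestamp string."""
    return df.withColumn("event_date", F.to_date("event_ts"))


@given("a dataframe with an event_ts column")
def step_given_df(context):
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    context.df = spark.createDataFrame(
        [("e1", "2024-05-01 10:00:00")], ["event_id", "event_ts"]
    )


@when("the enrichment transform is applied")
def step_when_transform(context):
    context.result = add_event_date(context.df)


@then("every row has a non-null event_date")
def step_then_check(context):
    assert context.result.filter(F.col("event_date").isNull()).count() == 0
```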
Nottingham, Nottinghamshire, United Kingdom Hybrid/Remote Options
Client Server
Senior Data Engineer (Databricks SQL Azure) Nottingham/WFH to £65k
Opportunity to progress your career in a senior, hands-on Data Engineer role at a SaaS tech company. As a Senior Data Engineer you'll join a newly formed team that deals with customer-facing … reporting on big data sets; they process 120 billion lines of data per day.