data governance, security, and access control within Databricks. • Provide technical mentorship and guidance to junior engineers. Must-Have Skills: • Strong hands-on experience with Databricks and Apache Spark (preferably PySpark). • Proven track record of building and optimizing data pipelines in cloud environments. • Experience with AWS services such as S3, Glue, Lambda, Step Functions, Athena, IAM, and VPC. • Proficiency …
hands-on approach to data science, analytics, and ML solutions. Continuously optimise data workflows for performance, reliability, and scalability. What you'll need: Proven hands-on experience with Databricks, Python, PySpark, and SQL. Machine learning experience in a cloud environment (AWS, Azure, or GCP). Strong understanding of ML libraries such as scikit-learn, TensorFlow, or MLflow. Solid background in …
Edinburgh, Roxburgh's Court, City of Edinburgh, United Kingdom
Bright Purple
based modelling workflows and PR reviews via Tabular Editor. Excellent design intuition: clean layouts, drill paths, and KPI logic. Nice to Have Python for automation or ad-hoc prep; PySpark familiarity. Understanding of Lakehouse patterns, Delta Lake, metadata-driven pipelines. Unity Catalog/Purview experience for lineage and governance. RLS/OLS implementation experience. …
profiling, ingestion, collation and storage of data for critical client projects. How to develop and enhance your knowledge of agile ways of working and working in an open source stack (PySpark/PySql). Quality engineering professionals utilise Accenture delivery assets to plan and implement quality initiatives to ensure solution quality throughout delivery. As a Data Engineering Manager, you will …
a best-in-class Lakehouse from scratch, this is the one. What You'll Be Doing Lakehouse Engineering (Azure + Databricks) Engineer scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark, and Spark SQL across a full Medallion Architecture (Bronze → Silver → Gold). Implement ingestion patterns for files, APIs, SaaS platforms (e.g. subscription billing), SQL sources, SharePoint and SFTP using … growing data function. Tech Stack You'll Work With Databricks: Lakeflow Declarative Pipelines, Workflows, Unity Catalog, SQL Warehouses Azure: ADLS Gen2, Data Factory, Key Vault, vNets & Private Endpoints Languages: PySpark, Spark SQL, Python, Git DevOps: Azure DevOps Repos, Pipelines, CI/CD Analytics: Power BI, Fabric What We're Looking For Experience Significant commercial experience of Data Engineering with … years delivering production workloads on Azure + Databricks. Strong PySpark/Spark SQL and distributed data processing expertise. Proven Medallion/Lakehouse delivery experience using Delta Lake. Solid dimensional modelling (Kimball) including surrogate keys, SCD types 1/2, and merge strategies. Operational experience: SLAs, observability, idempotent pipelines, reprocessing, backfills. Mindset Strong grounding in secure Azure Landing …
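The listing above asks for dimensional modelling with surrogate keys and SCD Type 1/2 merge strategies. As a rough illustration only, and not any employer's actual pipeline, a Type 2 merge (expire the changed row, append a new current version) can be sketched in plain Python; all names here (`scd2_merge`, `surrogate_key`, `is_current`) are hypothetical:

```python
from datetime import date

def scd2_merge(dim_rows, incoming, key, today=date(2024, 1, 1)):
    """Hypothetical SCD Type 2 merge sketch: expire changed current rows
    and append a new current version with a fresh surrogate key."""
    out = []
    incoming_by_key = {r[key]: r for r in incoming}
    max_sk = max((r["surrogate_key"] for r in dim_rows), default=0)
    # Pass 1: close out current rows whose attributes have changed.
    for row in dim_rows:
        new = incoming_by_key.get(row[key])
        if row["is_current"] and new and new["attrs"] != row["attrs"]:
            out.append({**row, "is_current": False, "valid_to": today})
        else:
            out.append(row)
    # Pass 2: append a new current version for new or changed keys.
    for k, r in incoming_by_key.items():
        current = next(
            (d for d in dim_rows if d[key] == k and d["is_current"]), None
        )
        if current is None or current["attrs"] != r["attrs"]:
            max_sk += 1
            out.append({key: k, "attrs": r["attrs"], "surrogate_key": max_sk,
                        "valid_from": today, "valid_to": None,
                        "is_current": True})
    return out
```

In a Databricks/Delta Lake setting the same idea would typically be expressed with `MERGE INTO` rather than row-by-row Python; this sketch only shows the bookkeeping the job ad's "SCD types 1/2 and merge strategies" refers to.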
Aberdeen, Aberdeenshire, Scotland, United Kingdom Hybrid/Remote Options
Reed
Reed Technology is delighted to be partnering with an innovative, cutting-edge company based in Aberdeen, currently seeking to permanently onboard a Python Developer. The successful candidate will join a dynamic product development team, contributing to the full software development …