optimizing scalable data solutions using the Databricks platform.

Key Responsibilities:
- Lead the migration of existing AWS-based data pipelines to Databricks.
- Design and implement scalable data engineering solutions using Apache Spark on Databricks.
- Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines.
- Optimize performance and cost-efficiency of Databricks workloads.
- Develop and maintain … best practices for data governance, security, and access control within Databricks.
- Provide technical mentorship and guidance to junior engineers.

Must-Have Skills:
- Strong hands-on experience with Databricks and Apache Spark (preferably PySpark).
- Proven track record of building and optimizing data pipelines in cloud environments.
- Experience with AWS services such as S3, Glue, Lambda, Step Functions, Athena, IAM.
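As a rough illustration of the migration work this role describes, here is a minimal PySpark sketch of landing a raw S3 dataset into a Delta table on Databricks. The bucket, paths, column names, and table name are all hypothetical, and the real pipelines would add schema enforcement and error handling:

```python
# Minimal sketch of a batch pipeline on Databricks; S3 paths and
# table names below are illustrative placeholders, not real ones.
# On Databricks `spark` is provided; the builder keeps this runnable elsewhere.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3-to-delta-migration").getOrCreate()

# Read raw events landed in S3 by the legacy AWS pipeline.
raw = spark.read.json("s3://example-bucket/raw/events/")

# Typical cleanup: de-duplicate, normalise timestamps, derive a partition column.
cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write as a Delta table so downstream jobs get ACID reads and time travel.
(cleaned.write.format("delta")
        .mode("overwrite")
        .partitionBy("event_date")
        .saveAsTable("analytics.events_clean"))
```

Partitioning by date and converting to Delta is a common first step in these migrations, since it directly supports the cost and performance optimisation responsibilities listed above.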
Edinburgh, Midlothian, United Kingdom Hybrid / WFH Options
Aberdeen Group
API-driven architectures.
- Oversee data governance initiatives including metadata management, data quality, and master data management (MDM).
- Evaluate and integrate big data technologies and streaming platforms such as Apache Kafka and Apache Spark.
- Collaborate with cross-functional teams to align data architecture with business goals and technical requirements.

About the candidate:
- Exceptional stakeholder engagement, communication, and organisational skills.
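For the Kafka and Spark integration mentioned above, a minimal Structured Streaming sketch might look like the following; the broker address, topic, checkpoint path, and target table are assumptions for illustration:

```python
# Minimal sketch of streaming ingest with Spark Structured Streaming;
# the Kafka broker, topic, checkpoint location, and table are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream-ingest").getOrCreate()

# Subscribe to the topic; Kafka delivers key/value as binary columns.
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders")
          .load())

# Decode the payload and keep Kafka's timestamp for downstream windowing.
orders = stream.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("kafka_ts"),
)

# Stream into a Delta table; the checkpoint makes the query restartable.
query = (orders.writeStream.format("delta")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
         .toTable("analytics.orders_raw"))
query.awaitTermination()
```

The checkpoint location is what gives the stream exactly-once delivery into Delta across restarts, which matters when this feed underpins the governance and data-quality initiatives described in the role.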