data engineering or a related field, with a focus on building scalable data systems and platforms. Expertise in modern data tools and frameworks such as Spark, dbt, Airflow, Kafka, Databricks, and cloud-native services (AWS, GCP, or Azure). Understanding of data modeling, distributed systems, ETL/ELT pipelines, and streaming architectures. Proficiency in SQL and at least one programming language.
Experience with go-to-market activities, RFI/RFP responses, and proposal materials. TOGAF or equivalent certification; cloud certifications (AWS/Azure); data & AI tooling (e.g., Azure Fabric, Snowflake, Databricks). Experience with Data Mesh and data governance platforms (e.g., Collibra, Informatica). Ability to develop a common language for data: data models, data dictionaries, vocabularies, taxonomies, or ontologies. Experience …
as modern data platforms, data product engineering, data marketplace architecture, data developer portals, and platform engineering. Experience co-selling partner solutions with hyperscalers or platforms (e.g., AWS, Azure, GCP, Snowflake, Databricks). Outstanding communication skills: able to translate complex ideas for both technical and business audiences. Demonstrated thought leadership in AI/ML, such as speaking at industry events, contributing to …
on a greenfield data transformation programme. Their current processes offer limited digital customer interaction, and the vision is to modernise these processes by:

- Building a modern data platform in Databricks.
- Creating a single customer view across the organisation.
- Enabling new client-facing digital services through real-time and batch data pipelines.

You will join a growing team of engineers and … a high-value greenfield initiative for the business, directly impacting customer experience and long-term data strategy.

Key Responsibilities:

- Design and build scalable data pipelines and transformation logic in Databricks.
- Implement and maintain Delta Lake physical models and relational data models.
- Contribute to design and coding standards, working closely with architects.
- Develop and maintain Python packages and libraries to support … data model components.
- Participate in Agile ceremonies (stand-ups, backlog refinement, etc.).

Essential Skills:

- PySpark and SparkSQL.
- Strong knowledge of relational database modelling.
- Experience designing and implementing in Databricks (DBX notebooks, Delta Lakes).
- Azure platform experience.
- ADF or Synapse pipelines for orchestration.
- Python development.
- Familiarity with CI/CD and DevOps principles.

Desirable Skills:

- Data Vault 2.0.
- Data …
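For readers unfamiliar with the stack this posting names, here is a minimal sketch of the kind of PySpark and Delta Lake transformation step the responsibilities describe. It assumes a Databricks runtime where a Spark session is available; the paths, table names, and columns (`/mnt/raw/customers`, `silver.customers`, `customer_id`, `email`) are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch of one batch pipeline step: conform raw customer records
# into a Delta table that could feed a single customer view.
# All names below (paths, tables, columns) are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

# On Databricks a `spark` session is injected; getOrCreate() also works locally.
spark = SparkSession.builder.getOrCreate()

# Read the raw landing data (hypothetical Delta location).
raw = spark.read.format("delta").load("/mnt/raw/customers")

# Transformation logic: normalise keys, stamp lineage, and deduplicate so
# the downstream model holds one row per customer.
customers = (
    raw.withColumn("email", F.lower(F.trim(F.col("email"))))
       .withColumn("ingested_at", F.current_timestamp())
       .dropDuplicates(["customer_id"])
)

# Persist as a managed Delta table: ACID writes plus time travel for auditing.
customers.write.format("delta").mode("overwrite").saveAsTable("silver.customers")
```

In a real Databricks project this logic would typically live in a versioned Python package (as the responsibilities mention) and be orchestrated by ADF or Synapse pipelines rather than run ad hoc.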