published data models and reports. Experience required: Strong background in data engineering, warehousing, and data quality. Proficiency in Microsoft 365, Power BI, and other BI tools. Familiarity with Azure Databricks and Delta Lake is desirable. Ability to work autonomously in a dynamic environment and contribute to team performance. Strong communication and influencing skills, and a positive, can-do attitude. Knowledge of …
and solutions in highly complex data environments with large data volumes. SQL/PLSQL experience, with the ability to write ad-hoc and complex queries to perform data analysis. Databricks experience is essential. Experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, and PySpark. You will be able to develop solutions in a hybrid data environment (on-prem and cloud), and must be able to collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, Azure, and AWS. Hands-on experience developing data pipelines for structured, semi-structured, and unstructured data, and experience integrating with their supporting stores (e.g. RDBMS, NoSQL databases, document databases, log files). Please apply ASAP.
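By way of illustration, here is a minimal PySpark sketch of the kind of pipeline work this listing describes: ingesting semi-structured log files, applying light cleansing, and landing a curated, partitioned table. All paths and column names are hypothetical, not taken from the role description.

```python
# Illustrative sketch only: a minimal PySpark pipeline ingesting
# semi-structured JSON logs into a curated warehouse table.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("log-ingest-sketch").getOrCreate()

# Ingest semi-structured data (JSON log files) into a DataFrame.
raw = spark.read.json("/landing/app_logs/*.json")

# Basic cleansing and typing: derive an event date, drop malformed rows.
curated = (
    raw
    .withColumn("event_ts", F.to_timestamp("timestamp"))
    .withColumn("event_date", F.to_date("event_ts"))
    .dropna(subset=["event_ts", "user_id"])
)

# Write a partitioned, query-ready table for downstream ad-hoc analysis.
(
    curated.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("/warehouse/app_logs_curated")
)
```

Partitioning by event date is a common warehousing choice here, since it keeps the ad-hoc analytical queries the role calls for cheap to run.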
to work through an umbrella company for the duration of the contract. Responsibilities will include collecting and analysing requirements, analysing data, and building data pipelines. Strong experience with Python and Databricks is essential, as is experience of working in Unix environments. You must also have strong experience of SQL and relational databases. You will have extensive data modelling experience and experience of …
on a greenfield data transformation programme. Their current processes offer limited digital customer interaction, and the vision is to modernise them by:
- Building a modern data platform in Databricks.
- Creating a single customer view across the organisation.
- Enabling new client-facing digital services through real-time and batch data pipelines.
You will join a growing team of engineers working on a high-value greenfield initiative for the business, directly impacting customer experience and long-term data strategy.
Key Responsibilities:
- Design and build scalable data pipelines and transformation logic in Databricks.
- Implement and maintain Delta Lake physical models and relational data models (see the sketch after this listing).
- Contribute to design and coding standards, working closely with architects.
- Develop and maintain Python packages and libraries to support data model components.
- Participate in Agile ceremonies (stand-ups, backlog refinement, etc.).
Essential Skills:
- PySpark and SparkSQL.
- Strong knowledge of relational database modelling.
- Experience designing and implementing in Databricks (DBX notebooks, Delta Lakes).
- Azure platform experience.
- ADF or Synapse pipelines for orchestration.
- Python development.
- Familiarity with CI/CD and DevOps principles.
Desirable Skills:
- Data Vault 2.0.
- Data …
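As a hedged illustration of the Delta Lake responsibilities above, here is a minimal Databricks-style PySpark sketch that upserts an incoming batch into a single-customer-view Delta table. The table name, landing path, and key column are hypothetical.

```python
# Illustrative sketch only: maintaining a Delta Lake model with an upsert,
# assuming a Databricks runtime (where delta and spark are preconfigured).
# Table names, paths, and columns are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided in a Databricks notebook

# Incoming batch of customer records, e.g. from an ADF-orchestrated ingest.
updates = spark.read.parquet("/landing/customers_batch")

# Merge into the single-customer-view Delta table: update matched customers,
# insert new ones, keeping the physical model current.
target = DeltaTable.forName(spark, "silver.customer_single_view")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

A MERGE of this kind updates the table in place rather than rewriting it wholesale, which is part of what makes Delta Lake a fit for the mixed real-time and batch pipelines the programme describes.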