
Prototype-Based Explanations to Improve Understanding of Unsupervised Datasets

Gabriella Casalino; Giovanna Castellano; Gianluca Zaza
2024-01-01

Abstract

Explainable Artificial Intelligence (XAI) aims to enhance the transparency, accountability, and trustworthiness of computer systems, which is especially important in the healthcare domain to comply with applicable regulations and build trust among users. However, the majority of state-of-the-art XAI methods are suited to supervised learning tasks, whereas in real-world applications an adequate collection of labelled examples is often costly or infeasible. To overcome this limitation and broaden the applicability of XAI to unlabelled samples, we propose a novel approach that exploits unsupervised prototype-based learning to uncover the hidden structure of the data and applies the resulting prototypes to explain emerging patterns. In the proposed workflow, this idea is realized by combining two prototype-based algorithms, namely Non-negative Matrix Factorization and Fuzzy C-Means, with Shapley Additive Explanations to derive graphical explanations of the extracted prototypes. The proposed approach is validated on real-life data from sensor-based monitoring of mental health.
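The abstract does not give implementation details, but the Fuzzy C-Means prototype-extraction step it names can be sketched in a few lines. Below is a minimal, pure-NumPy illustration under assumed parameters (fuzzifier m = 2, toy two-blob data); it is not the authors' implementation. In the full workflow described above, the extracted prototypes and memberships would then be fed to a SHAP explainer (e.g. via a surrogate model trained on the cluster assignments), a step omitted here.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy C-Means: alternates centroid and membership updates.

    Returns the cluster prototypes (centroids) and the fuzzy
    membership matrix U, whose rows sum to 1.
    """
    rng = np.random.default_rng(seed)
    # Random initial memberships, normalized per sample.
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # Centroids are membership-weighted means of the data.
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distances from every sample to every centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-10
        # Standard FCM membership update: u_ij ∝ d_ij^(-2/(m-1)).
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centroids, U

# Toy data: two well-separated blobs around (0, 0) and (5, 5).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(5.0, 0.1, (20, 2))])
prototypes, memberships = fuzzy_c_means(X, n_clusters=2)
```

On this toy data the two prototypes converge near the blob centres, and each sample's membership row indicates how strongly it belongs to each prototype — the quantity the paper then explains graphically via SHAP.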

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11586/502581
