
Prototype-Based Explanations to Improve Understanding of Unsupervised Datasets

Gabriella Casalino; Giovanna Castellano; Gianluca Zaza
2024-01-01

Abstract

Explainable Artificial Intelligence (XAI) aims to enhance the transparency, accountability, and trustworthiness of computer systems, which is especially important in the healthcare domain to comply with applicable regulations and build trust among users. However, the majority of state-of-the-art XAI methods are suited to supervised learning tasks, whereas in real-world applications an adequate collection of labelled examples is often costly or infeasible. To overcome this limitation and broaden the applicability of XAI to unlabelled samples, we propose a novel approach that exploits unsupervised prototype-based learning to uncover the hidden structure of the data and uses these prototypes to explain emerging patterns. In the proposed workflow, this idea is realized by combining two prototype-based algorithms, namely Non-negative Matrix Factorization and Fuzzy C-Means, with Shapley Additive Explanations to derive graphical explanations of the extracted prototypes. The proposed approach is validated on real-life data from sensor-based monitoring of mental health.
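The abstract mentions Fuzzy C-Means as one of the two prototype-based algorithms used to extract cluster prototypes before the SHAP explanation step. The paper's exact pipeline is not reproduced here; the following is only an illustrative NumPy sketch of the prototype-extraction step (the function name, parameters, and toy data are our own, not taken from the paper):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy C-Means sketch.

    Returns (centers, U), where centers are the c cluster prototypes
    and U is the (n_samples, c) fuzzy membership matrix whose rows sum to 1.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random fuzzy memberships, normalized so each row sums to 1
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m                                        # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted prototypes
        # Euclidean distance of every sample to every prototype
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                             # guard against /0
        # Standard FCM membership update, then row-normalize
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy usage: two well-separated blobs; the two prototypes should land near them.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(5.0, 0.1, (20, 2))])
centers, U = fuzzy_c_means(X, c=2)
```

In the workflow described by the abstract, prototypes such as `centers` (or the NMF basis vectors) would then be passed to SHAP to produce graphical, feature-level explanations of the emerging patterns; a common way to do this is to explain a surrogate model trained to predict cluster membership, though the paper's specific coupling is not detailed on this page.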
ISBN: 979-8-3503-1955-2
File in this product:
Prototype-Based_Explanations_to_Improve_Understanding_of_Unsupervised_Datasets.pdf
Description: Editorial version
Type: Document in editorial version
Licence: NOT PUBLIC - Private/restricted access
Size: 530.98 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11586/502581
Citations
  • Scopus: 0