Visual Analytics for Explainable AI with Spatio-Temporal Data: A Comparative Study
Corizzo, R.; Altieri, M.; Ceci, M.
2025-01-01
Abstract
The increasing sophistication of AI models makes Explainable AI (XAI) extremely relevant for enhancing predictions with explanation capabilities. Recent XAI approaches for tabular, image, and graph data have become popular and widely adopted. However, the multi-dimensional nature of spatio-temporal data collected in sensor networks makes most approaches ineffective in such scenarios. XAI approaches that specifically deal with this type of data are emerging, but the complexity of the extracted explanations leads to multi-axis information for nodes, features, and timesteps that is cumbersome to visualize. Indeed, effective visualizations could help bridge the existing gap between XAI model predictions and their fruitful and responsible exploitation in practical domains. In this paper, we address this gap by studying the effectiveness of different visualization techniques with multi-dimensional explanations. We adopt a meta-learning XAI framework that identifies salient factors from multiple analytical views. Then, we present a qualitative and quantitative study comparing the effectiveness of 14 visualization techniques on two real-world sensor network datasets. Our results reveal useful patterns, as well as the merits and pitfalls of each technique, paving the way for future work on this topic.
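As a minimal sketch of the multi-axis explanations the abstract refers to (not taken from the paper; the tensor shape, axis ordering, and aggregation by mean are all assumptions for illustration), a spatio-temporal saliency tensor indexed by node, feature, and timestep can be collapsed into one importance profile per analytical view before any plotting is attempted:

```python
import numpy as np

# Hypothetical explanation tensor of saliency scores with axes
# (nodes, features, timesteps), e.g. 5 sensors, 3 features, 24 steps.
rng = np.random.default_rng(0)
saliency = rng.random((5, 3, 24))

# Collapse the 3-D tensor into one 1-D importance profile per view,
# mirroring the node / feature / timestep axes named in the abstract.
node_importance = saliency.mean(axis=(1, 2))      # shape (5,)
feature_importance = saliency.mean(axis=(0, 2))   # shape (3,)
timestep_importance = saliency.mean(axis=(0, 1))  # shape (24,)

print(node_importance.shape, feature_importance.shape, timestep_importance.shape)
```

Each 1-D profile can then feed a familiar visualization (bar chart, line plot, or a row of a heatmap), which is one simple way to sidestep rendering the raw three-dimensional tensor directly.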


