
Linked open data-based explanations for transparent recommender systems

Musto, Cataldo;Lops, Pasquale;de Gemmis, Marco;Semeraro, Giovanni
2019-01-01

Abstract

In this article, we propose a framework that generates natural language explanations supporting the suggestions produced by a recommendation algorithm. The cornerstone of our approach is the use of Linked Open Data (LOD) for explanation purposes. Indeed, the descriptive properties freely available in the LOD cloud (e.g., the author of a book or the director of a movie) can be used to build a graph that connects the recommendations the user received to the items she previously liked, via the properties extracted from the LOD cloud. In a nutshell, our approach is based on the insight that the properties describing both the items the user previously liked and the suggestions she received can be effectively used to explain the recommendations. The framework is both algorithm-independent and domain-independent: it can generate a natural language explanation for any kind of recommendation algorithm, and it can explain a single recommendation (Top-1 scenario) as well as a group of recommendations (Top-N scenario). It is worth noting that algorithm independence does not mean that the framework explains to the user how the recommendations were generated or how the recommendation algorithm works. Rather, the framework explains to users why they might like the recommended items, independently of the recommendation algorithm that generated them. In the experimental evaluation, we carried out a user study (N = 680) investigating the effectiveness of our framework in three different domains: movies, books, and music. Results showed that our technique leads to transparent explanations in all the domains, and such explanations proved independent of the specific recommendation algorithm in most of the experimental settings. Moreover, we also showed the effectiveness of our strategy when an entire group of recommendations has to be explained.
As a case study, we integrated the framework into a real-world application: a conversational recommender system implemented as a Telegram Bot. The idea is to use the explanations to support both the training phase (when the user expresses her preferences) and the recommendation step (when the user receives the recommendations). Interesting outcomes emerged from these preliminary experiments.
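The core idea sketched in the abstract — linking a recommended item to previously liked items through shared LOD properties, then verbalizing those links — can be illustrated with a small, self-contained sketch. The item and property data below are hypothetical stand-ins for what a DBpedia query would return, and the sentence template is only one possible verbalization, not the paper's actual generation strategy.

```python
# Illustrative sketch (not the paper's implementation): connect a
# recommended item to items the user liked via shared descriptive
# properties, and turn each shared property into an explanation sentence.
# The triples below are hypothetical examples, not real DBpedia data.

from collections import defaultdict

# Hypothetical LOD-style triples: (item, property, value)
triples = [
    ("The Matrix", "director", "Lana Wachowski"),
    ("The Matrix", "genre", "Science Fiction"),
    ("Inception", "director", "Christopher Nolan"),
    ("Inception", "genre", "Science Fiction"),
    ("Interstellar", "director", "Christopher Nolan"),
    ("Interstellar", "genre", "Science Fiction"),
]

def explain(recommended, liked_items, triples):
    """Return natural language sentences linking the recommendation
    to previously liked items through shared (property, value) pairs."""
    # Index items by the (property, value) pairs that describe them.
    by_property = defaultdict(set)
    for item, prop, value in triples:
        by_property[(prop, value)].add(item)

    sentences = []
    for (prop, value), items in by_property.items():
        if recommended in items:
            # Liked items sharing this property with the recommendation.
            for liked in sorted(items & set(liked_items)):
                sentences.append(
                    f"I suggest {recommended} because its {prop} is "
                    f"{value}, as in {liked}, which you liked."
                )
    return sentences

for sentence in explain("Interstellar", ["Inception", "The Matrix"], triples):
    print(sentence)
```

In this toy example, the recommendation "Interstellar" is connected to "Inception" through both a shared director and a shared genre, and to "The Matrix" through the shared genre, yielding one explanation sentence per connection.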
Files in this record:
- 1-s2.0-S1071581918300946-main.pdf (published version, Adobe PDF, 2.92 MB): restricted access
- 1-s2.0-S1071581918300946-main(accepted manuscript).pdf (pre-print, Adobe PDF, 2.66 MB): open access, Creative Commons license
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11586/250175
Citations
  • Scopus: 57
  • ISI (Web of Science): 44