
Semantic interpretability of latent factors for recommendation

Di Noia T.; Di Sciascio E.; Ragone A.
2019-01-01

Abstract

Model-based approaches to recommendation have proven to be very accurate. Unfortunately, by operating in a latent space, they lose any reference to the actual semantics of the recommended items. In this extended abstract, we show how to initialize the latent factors of Factorization Machines with semantic features drawn from a knowledge graph in order to train an interpretable model. Finally, we introduce and evaluate semantic accuracy and robustness as measures of the knowledge-aware interpretability of the model.
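The initialization idea described in the abstract, tying each latent dimension of a Factorization Machine to one knowledge-graph feature, might be sketched as follows. The triples, the (predicate, object) feature encoding, and the noise scale are illustrative assumptions for this sketch, not the paper's actual setup.

```python
import numpy as np

# Hypothetical toy knowledge-graph triples: (item, predicate, object).
# Each distinct (predicate, object) pair becomes one semantic feature,
# so every latent dimension keeps an explicit meaning.
triples = [
    ("item_1", "genre", "SciFi"),
    ("item_1", "director", "Scott"),
    ("item_2", "genre", "SciFi"),
    ("item_3", "genre", "Drama"),
]

items = sorted({s for s, _, _ in triples})
features = sorted({(p, o) for _, p, o in triples})
item_idx = {v: i for i, v in enumerate(items)}
feat_idx = {f: j for j, f in enumerate(features)}

# Binary item-feature matrix derived from the graph.
X = np.zeros((len(items), len(features)))
for s, p, o in triples:
    X[item_idx[s], feat_idx[(p, o)]] = 1.0

# Initialize the FM latent-factor matrix V from the semantic features
# (small noise breaks ties) instead of using random values, so that
# dimension j of the trained model can still be read as feature j.
rng = np.random.default_rng(0)
V = X + 0.01 * rng.standard_normal(X.shape)

def fm_interaction(x, V):
    """Second-order FM term: 0.5 * sum_f ((x @ V)_f^2 - (x^2 @ V^2)_f)."""
    linear = x @ V
    return 0.5 * float(np.sum(linear**2 - (x**2) @ (V**2)))
```

After training, a large value in column j of `V` can be read back as the influence of knowledge-graph feature j, which is what makes the model interpretable.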
Files for this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11586/401543
Warning

The data shown have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: ND