Semantic interpretability of latent factors for recommendation
Di Noia T.; Di Sciascio E.; Ragone A.
2019-01-01
Abstract
Model-based approaches to recommendation have proven to be very accurate. Unfortunately, by working in a latent space they lose any reference to the actual semantics of the recommended items. In this extended abstract, we show how to initialize the latent factors of Factorization Machines with semantic features coming from a knowledge graph, in order to train an interpretable model. Finally, we introduce and evaluate semantic accuracy and robustness as measures of the knowledge-aware interpretability of the model.
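The initialization idea described above — binding each latent dimension to an explicit knowledge-graph feature so the trained factors remain interpretable — can be sketched as follows. All triples, entity names, and the simple binary encoding below are illustrative assumptions for the sketch, not details taken from the paper:

```python
# Hedged sketch: initializing item latent factors from knowledge-graph features.
# The toy triples and the binary (one-hot) encoding are assumptions made for
# illustration; the paper's actual feature construction may differ.

# Toy knowledge graph: (subject item, predicate, object) triples.
triples = [
    ("MatrixReloaded", "genre", "SciFi"),
    ("MatrixReloaded", "director", "Wachowski"),
    ("Inception", "genre", "SciFi"),
    ("Inception", "director", "Nolan"),
]

# Each distinct (predicate, object) pair becomes one latent dimension,
# so every dimension keeps an explicit, human-readable meaning.
features = sorted({(p, o) for _, p, o in triples})
feature_index = {f: j for j, f in enumerate(features)}

def init_latent_factors(item):
    """Binary vector: 1.0 where the item is linked to that feature in the KG."""
    connected = {(p, o) for s, p, o in triples if s == item}
    return [1.0 if f in connected else 0.0 for f in features]

# Initial factor matrix V, to be refined by Factorization Machine training.
V = {item: init_latent_factors(item) for item in {s for s, _, _ in triples}}
```

After this initialization, training proceeds as usual for a Factorization Machine; because each dimension started as a named knowledge-graph feature, the learned weights can be read back in terms of those features.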