ExUM 2024 - 6th Workshop on Explainable User Modeling and Personalised Systems

Marco Polignano (Conceptualization); Cataldo Musto (Methodology); Giovanni Semeraro (Validation)
2024-01-01

Abstract

Adaptive and personalized systems have become pervasive technologies that play an increasingly important role in our daily lives. Indeed, we are now accustomed to interacting with algorithms that leverage the power of Large Language Models (LLMs) to assist us in various scenarios, from services suggesting music or movies to personal assistants proactively supporting us in complex decision-making tasks. As these technologies continue to shape our everyday experiences, it becomes imperative to ensure that the internal mechanisms guiding these algorithms are transparent and comprehensible. The EU General Data Protection Regulation (GDPR) recognizes users' right to explanation when confronted with intelligent systems, highlighting the significance of this aspect. Regrettably, current research often prioritizes the effectiveness of personalization strategies, such as recommendation accuracy, at the expense of model explainability. To address this concern, the workshop aims to provide a platform for in-depth discussions of challenges, problems, and innovative research approaches in the field, focusing specifically on the role of transparency and explainability in recent methodologies for constructing user models and developing personalized and adaptive systems.
Use this identifier to cite or link to this item: https://hdl.handle.net/11586/504000