Leveraging Large Language Models for Usability Testing: A Preliminary Study

Calvano M.; Lanzilotti R.; Piccinno A.; Ragone A.
2025-01-01

Abstract

Despite growing efforts to prioritize user experience in product development, software organizations often perform few or no usability engineering activities. It is therefore crucial to develop strategies that integrate such activities effectively into software development processes. Rapid advances in Artificial Intelligence have significantly influenced many aspects of daily life, particularly with the emergence of Large Language Models (LLMs), which are promising tools for supporting activities that enhance the usability of software products. This paper presents a study investigating the potential of LLMs to assist practitioners in conducting usability tests. Specifically, we conducted an experiment in which LLMs generated usability test tasks. Our goal is to assess whether AI can effectively support evaluators by comparing the tasks generated by LLMs with those defined by usability experts. The findings indicate that, while LLMs can provide valuable support, effective usability testing still requires human oversight and expert intervention.
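To illustrate the kind of LLM support the abstract describes, below is a minimal, hypothetical sketch of prompting an LLM to draft usability test tasks for a product. It assumes the openai Python client, a "gpt-4o" model choice, and an invented product description; it is not the authors' actual experimental setup.

# Illustrative sketch (not the authors' protocol): asking an LLM to draft
# usability test tasks for a given product, assuming the "openai" Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical product under test
product_description = "A mobile e-commerce app for browsing and buying books."

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model choice for this sketch
    messages=[
        {"role": "system",
         "content": "You are a usability engineer preparing a moderated usability test."},
        {"role": "user",
         "content": (
             "Write 5 usability test tasks for the product below. Each task should "
             "describe a realistic user goal, avoid leading wording, and state a "
             "clear success criterion.\n\nProduct: " + product_description
         )},
    ],
)

print(response.choices[0].message.content)  # draft tasks for expert review

In line with the paper's findings, tasks drafted this way would still need to be reviewed and refined by a usability expert before being used in a test session.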
Use this identifier to cite or link to this item: https://hdl.handle.net/11586/552085