
The fine line between automation and augmentation in website usability evaluation

Andrea Esposito; Giuseppe Desolda; Rosa Lanzilotti
2024-01-01

Abstract

Artificial Intelligence (AI) systems are becoming widespread in all aspects of society, bringing benefits to the whole economy. There is a growing understanding of the potential benefits and risks of this type of technology. While the benefits include more efficient decision processes and greater industrial productivity, the risks include a progressive disengagement of human beings from crucial aspects of decision-making. In this respect, a new perspective is emerging that aims to restore the centrality of human beings while reaping the benefits of AI systems to augment rather than replace professional skills: Human-Centred AI (HCAI) is a novel framework that posits that high levels of human control do not contradict high levels of computer automation. In this paper, we investigate the two antipodes, automation vs. augmentation, in the context of website usability evaluation. Specifically, we analyzed whether the level of automation provided by a tool for semi-automatic usability evaluation can support evaluators in identifying usability problems. Three visualizations, each corresponding to a different level of automation, ranging from a full-automation approach to an augmentation approach, were compared in an experimental study. We found that a fully automated approach could help evaluators detect a significant number of medium- and high-severity usability problems, which are the most critical in a software system; however, it also emerged that more low-severity usability problems could be detected using one of the augmented approaches proposed in this paper.
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11586/474180
