Guidelines for Hypermedia Usability Inspection
COSTABILE, Maria
2001-01-01
Abstract
It is generally acknowledged that the only way to determine the quality of interactive systems is to perform a usability evaluation. This proves especially true for multimedia systems, where the use of concurrent media generates further usability problems. We aim to develop effective evaluation methods to meet industry demand and address concerns about the lack of cost-effective methods. These concerns prevent most companies from performing usability evaluation, resulting in poorly designed and unusable software. Among the various usability evaluation approaches, usability inspection methods are gaining popularity because they cost less than traditional lab-based usability evaluations. These methods involve expert evaluators only, who inspect the application and, based on their knowledge, provide judgments about the usability of the different application elements. (Examples of inspection methods include heuristic evaluation, cognitive walkthrough, guideline review, and formal usability inspection.1) A drawback of these inspection methods is that the results depend on the inspectors’ skills, experience, and ability. Moreover, training inspectors proves difficult and expensive. Another limitation is that the techniques mostly focus on surface-oriented features related to the graphical interface. Few approaches focus on the application structure or on the organization of information elements and functionality.2 Also, most methods are too general: although they’re valid for any type of interactive system, they often neglect features intrinsic to a specific class of systems. Our research addresses evaluating the usability of hypermedia systems—both offline (CD-ROMs) and online (Web)—and tries to capture the features that most characterize the specific nature of these systems.
Here, we describe an inspection technique that lets evaluators concentrate on the usability of specific aspects of hypermedia applications, such as information and navigation structuring, media integration and synchronization, and so on, without neglecting the surface aspects. Our technique uses operational guidelines, called Abstract Tasks (ATs), which systematically drive the inspection activities, allowing even less experienced evaluators to produce valuable results.


