To better support usability practice, most usability research focuses on evaluation methods. New ideas in usability research are mostly proposed as new evaluation methods, and many publications describe experiments that compare methods. Such comparisons may indicate that some methods have important deficiencies, and their authors often advise usability practitioners to prefer a specific method in a particular situation. An expectation persists in human-computer interaction (HCI) that results about evaluation methods should be the standard "unit of contribution", rather than larger units (e.g., usability work as a whole) or smaller ones (e.g., the impact of specific aspects of a method). This article argues that these foci on comparisons and method innovations ignore the reality that usability evaluation methods are loose, incomplete collections of resources, which successful practitioners configure, adapt, and complement to match specific project circumstances. Through a review of existing research on methods and resources, the article identifies resources associated with specific evaluation methods, as well as resources that can complement existing methods or be used on their own. Next, a generic classification scheme for evaluation resources is developed, and the scheme is extended with project-specific resources that shape the effective use of methods. With these reviews and analyses in place, implications for research, teaching, and practice are derived. Throughout, the article draws on culinary analogies. A recipe is nothing without its ingredients, and just as the quality of what is cooked reflects the quality of its ingredients, so too does the quality of usability work reflect the quality of resources as configured and combined. A method, like a recipe, is incomplete without the resources that realise it.
Ask people outside the Human-Computer Interaction (HCI) field about usability, and many will mention the "classic" discount methods popularized by Jakob Nielsen and others. Discount methods have the appeal of seeming easy to do and, more importantly for business, of being inexpensive. This is especially attractive to smaller startup companies with low budgets. But are discount methods too risky to justify even their "low" cost? This month's business column authors think so, based on their research and experience. Indeed, they believe that these discount methods may actually backfire and end up discrediting the field. Following a lively discussion on the CHI-WEB listserv, we asked them to explain what they see as the risks, and what they believe we, as a profession, can and should do about them.
Usability inspection methods (UIMs) remain an important discount approach to usability evaluation. They can be applied to any designed artefact during development: a paper prototype, a storyboard, a working prototype (e.g., in Macromedia Flash™ or Microsoft PowerPoint™), tested production software, or an installed public release. They are analytical evaluation methods: unlike empirical methods such as user testing, they involve no typical end users. UIMs require only a designed artefact and trained analysts, so evaluation is possible with low resources (hence "discount" methods). Although low resources bring risks, well-informed practices can disproportionately improve analyst performance, and with it cost-benefit ratios. This chapter introduces UIMs, covering six methods and one further method, and provides approaches to assessing existing, emerging, and future UIMs and their effective use.