During the COVID-19 pandemic, usability practitioners and researchers had to find new approaches to product testing. In-person, contact-intensive product testing became a safety concern, resulting in the need for more remote testing practices. An underexplored and promising method for capturing subjective usability measures is Watching Others Using Video, wherein users rate a product after watching videos of others using it. This method could have broad application, but previous research found that it yields positively biased usability ratings relative to post-use ratings. This study explored potential factors (e.g., success, error, and failure) that may affect how users perceive satisfaction when using this method. To do so, participants were shown videos of different product interactions while the factors of interest were systematically varied. Additionally, the effects of the number of errors and of error recovery versus failure were explored. Participants watched videos of the following products being used and rated them using the After-Scenario Questionnaire (ASQ): a website, an electric can opener, and a digital timer. Results showed inflated satisfaction ratings across products; however, the effect did not reach statistical significance for the website. There was also no observable effect of increasing the number of errors or of showing failures. This may be attributed to poor error detection or negligible error severities. Further research is needed before Watching Others Using Video can be accurately implemented as a viable testing method.
The Usefulness, Satisfaction, and Ease of Use Questionnaire (USE) is a 30-item measure of subjective usability. The content of the USE allows the usability of a product to be interpreted along four important dimensions—Usefulness, Ease of Use, Ease of Learning, and Satisfaction—rather than as a global construct. Although the USE has been a valuable tool for human factors professionals, previous analyses revealed the need to psychometrically refine the measure. Additionally, the USE is too long for many usability testing protocols that are already time-consuming. The current study addresses these issues by developing the USE-Lite, a psychometric reduction of the USE, and then submitting it to subsequent validation efforts. Regarding the latter, 194 participants evaluated Amazon.com and Microsoft Word using the USE-Lite and the System Usability Scale (SUS). Across products, the USE-Lite had very high reliability, both overall and by dimension. Criterion-related validity was demonstrated by correlating the USE-Lite dimensions with the SUS, yielding r = .40 to .76. In terms of dimensionality, a principal axis factor analysis was performed on both product data sets. Results revealed four factors generally aligning with the original dimensions (Usefulness, Ease of Use, Ease of Learning, and Satisfaction). Further research and potential item refinement are needed to fully assess the practical application of the USE-Lite.
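The criterion-related validity reported above is a Pearson correlation between USE-Lite dimension scores and SUS scores. As a minimal sketch of that computation (the rating lists below are hypothetical, not data from the study):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical per-participant scores: a USE-Lite dimension vs. SUS totals
use_lite_satisfaction = [5.2, 4.1, 6.0, 3.8, 5.5, 4.9]
sus_scores = [78, 62, 90, 55, 81, 70]
r = pearson_r(use_lite_satisfaction, sus_scores)
```

In practice a tool such as scipy.stats.pearsonr would also report a p-value, but the core statistic is just the standardized covariance shown here.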
Previous work has investigated the need for domain-specific heuristics. Nielsen’s ten heuristics offer a general list of principles, but those principles may not capture usability issues specific to a given interface. Studies have demonstrated methods for establishing a domain-specific heuristic set, but very little research has been conducted on interfaces in the physical environment, creating a gap in the state of the art. The research described in this paper aims to address this gap by developing an environmental heuristic set; the heuristic set was developed specifically for the Houston light rail system, METRORail. Following development, the heuristic set was validated against Nielsen’s more general heuristics through several field studies. Results showed that significantly more usability issues were identified when using the environment-based heuristics than the general heuristics. This suggests that domain-specific heuristics provide a framework that allows evaluators to capture usability issues particular to the interface of the physical environment.