Research problem: Concurrent think-aloud (CTA) protocols are one of the dominant approaches to usability testing. However, there is still debate about the validity of the method, partly focusing on the usefulness and exhaustiveness of participants' verbalizations. The rise of eye-tracking technology sheds new light on this discussion, as participants' working processes can now be observed in more detail.
Research questions: (1) What kinds of verbalizations do participants produce, and how do they relate to the information that can be directly observed using eye tracking? (2) What do eye movements reveal about cognitive processes at times when participants stop verbalizing?
Literature review: Our study replicates an earlier study by Cooke (2010), who used a combination of CTA protocols and eye tracking in a small sample of experienced and highly educated participants to investigate the validity of CTA. Cooke's results suggest that the additional value of participants' verbalizations is limited: at least 77% of the verbalizations referred to things that could easily be observed with eye tracking.
Methodology: We conducted a study in which 60 participants with different characteristics performed tasks on informational websites. During their task performance, they verbalized their thoughts while their eye movements were simultaneously measured. The resulting think-aloud protocols were divided into verbalization units, which were coded into content types. Silences were registered, and eye movements during these silences were analyzed.
Results and discussion: We found a different distribution of verbalization types than Cooke (2010) reported, with far more verbalizations in which participants formulated doubts, judgments of the website, or expressions of frustration. In our study, verbalizations provided a substantial contribution in addition to the directly observable user problems. We measured a rather high percentage of silences (27%), during which participants most often were scanning pages for information. During these silences, interesting observations could be made about users' processes and about obstacles on the website. The implication of our study is that we now have a better understanding of the types of verbalizations that a CTA evaluation might generate. Further, we know that relevant usability observations can be made during silences. A limitation is that we do not yet know the influence of specific characteristics of the evaluation setting on the types of verbalizations and silences. Future research should focus on the influence of evaluation settings on the outcomes of an evaluation, in particular the influence of characteristics of the participants involved in the study.
Online questionnaires are frequently used to monitor the quality of municipal and other governmental websites. At present, many government organizations seem to reinvent the wheel and develop their own questionnaires. As a result, website quality is often assessed with instruments that are neither comparable with each other nor empirically validated. This article presents a generic Website Evaluation Questionnaire (WEQ) for the evaluation of informational websites. The WEQ was developed on the basis of the literature on usability and user satisfaction and was tested and revised in several rounds. This has resulted in a reliable questionnaire that measures clearly distinct quality dimensions of informational websites. The WEQ can be used by governmental organizations to evaluate their websites and to benchmark their results against each other.
Web sites increasingly encourage users to provide comments on the quality of the content by clicking on a feedback button and filling out a feedback form. Little is known about users' abilities to provide such feedback. To guide the development of evaluation tools, this study examines to what extent users with various background characteristics are able to provide useful comments on informational Web sites. Results show that it is important to keep the feedback tools both simple and attractive so that users will be able and willing to provide useful feedback on Web site pages.
The retrospective think-aloud method, in which participants work in silence and verbalize their thoughts afterwards while watching a recording of their performance, is often used for the evaluation of websites. However, participants may not always be able to recall what they thought when the recording offers only a few visual cues to help them remember their task execution process. In our study, we complemented the recording of the performance with a gaze trail of the participants' eye movements in order to elicit more verbalizations. A comparison was made between the traditional retrospective think-aloud protocols and the variant with eye movements. Contrary to our expectations, no differences were found between the two conditions in the number of problems, the ways these problems were detected, or the types of problems. Two possible explanations for this result are that the eye movements might have been rather confrontational and distracting for participants, and that the probing we used was rather generic. The added value might be stronger when specific questions are asked based on the observed eye movements. Implications for usability practitioners are discussed in the conclusions of this paper.