2016
DOI: 10.14236/ewic/hci2016.101
“Thinking About Thinking Aloud”: An Investigation of Think-Aloud Methods in Usability Testing

Abstract: Usability has become imperative for survival on the web and has therefore long been considered a crucial aspect of web design. This paper presents the results of a study that compared two think-aloud usability testing methods: the concurrent think-aloud (CTA) and the retrospective think-aloud (RTA) methods. Data on task performance, testing experience, and usability problems were collected from 40 participants, equally distributed between the two think-aloud conditions. The results found that while the thinking…

Cited by 19 publications (26 citation statements)
References 3 publications (4 reference statements)
“…Time per problem can be calculated by dividing the time the evaluator spent on a method by the number of problems identified by that method [2]. The CTA method required 29 minutes per usability problem, whereas the RTA method required 68 minutes per usability problem and the HB method required 45 minutes per usability problem.…”
Section: Table 14: TA Methods and Time Expended
confidence: 99%
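The citation statement above describes a simple ratio: time per problem is the evaluator's total time spent on a method divided by the number of problems that method identified. A minimal sketch of that arithmetic follows; the totals below are hypothetical, chosen only to reproduce the reported per-problem figures, since the source quotes only the resulting ratios:

```python
def minutes_per_problem(total_minutes: float, problems_found: int) -> float:
    """Time per problem = evaluator time spent on a method / problems identified."""
    return total_minutes / problems_found

# Hypothetical totals (assumed, not from the source) that yield the
# reported ratios of 29, 68, and 45 minutes per usability problem.
for method, (minutes, problems) in {
    "CTA": (290, 10),
    "RTA": (680, 10),
    "HB": (450, 10),
}.items():
    print(method, minutes_per_problem(minutes, problems))
```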
“…We considered a number of measures during the process of identifying the usability problem in this study in order to reduce the evaluator effect and to increase the reliability and validity of data [19]. This process is explained in detail in [2]. This subsection presents the results relating to the quantity and quality of usability problem data at the level of individual problems (i.e., problems detected per participant in each condition) and final problems (i.e., the aggregate problems detected in each condition).…”
Section: Usability Problems
confidence: 99%
“…They are in charge of welcoming the participant and administering the questionnaires explained in Section 5. The expert practises active listening in a semi-structured interview [24,25] and uses think-aloud methods from usability testing [26]. • Observer (placed in the rear-left seat): in charge of noting, as a human-factors specialist, every relevant phrase (called a verbatim) pronounced by the driver, together with the driver's observations and reactions [27].…”
confidence: 99%