In this study, we investigated whether incorporating eye tracking into cognitive interviewing is effective for pretesting survey questions. In the control condition, a cognitive interview was conducted using a standardized interview protocol that included predefined probing questions for about one-quarter of the questions in a 52-item questionnaire. In the experimental condition, participants' eye movements were tracked while they completed an online version of the questionnaire, and their reading patterns were simultaneously monitored for evidence of response problems. Afterward, a cognitive interview was conducted using an interview protocol identical to that in the control condition. We compared the two approaches with regard to the number and types of problems they detected. We found support for our hypothesis that cognitive interviewing and eye tracking complement each other effectively: as expected, the hybrid method identified more questionnaire problems and more problematic questions than cognitive interviewing alone.
Pretesting survey questions via cognitive interviewing rests on two assumptions: that the problems identified by the method actually occur in a later survey, and that question revisions based on cognitive interviewing findings produce higher-quality data than the original questions. In this study, we empirically tested these assumptions in a web survey experiment (n = 2,200). Respondents received one of two versions of a question on self-reported financial knowledge: either the original draft version, which had been pretested in ten cognitive interviews, or a revised version, which was modified based on the results of those interviews. We examined whether the cognitive interviewing findings predicted the problems encountered in the web survey and whether the revised question version was associated with higher content-related and criterion-related validity than the draft version. The results show that cognitive interviewing is effective at identifying real question problems, but not necessarily at fixing survey questions and improving data quality. Overall, our findings underscore the importance of iterative pretesting designs, that is, carrying out multiple rounds of cognitive interviews and also testing the revised questions to ensure that they are indeed of higher quality than the drafts.
The method of web probing integrates cognitive interviewing techniques into web surveys and is increasingly used to evaluate survey questions. In a typical web probing scenario, probes are administered immediately after the question to be tested (concurrent probing), usually as open-ended questions. Probes can also be administered in a closed format, with the response categories for the closed probes developed in previously conducted qualitative cognitive interviews. Closed probes offer several benefits, such as reduced costs and greater time efficiency, because they do not require manual coding of open-ended responses. In this article, we investigate whether the insights into item functioning gained from closed probes are comparable to those gained from open-ended probes, and whether closed probes are equally suited to capturing the cognitive processes that open-ended probes are traditionally intended to elicit. The findings reveal statistically significant differences between the formats in the variety of themes, the patterns of interpretation, the number of themes per respondent, and nonresponse. No differences in the number of themes across formats were found by sex or educational level.
Cognitive online pretests have, in recent years, become recognized as a promising tool for evaluating questions prior to their use in actual surveys. While existing research has shown that cognitive online pretests produce results similar to face-to-face cognitive interviews with regard to the problems detected and the item revisions suggested, little is known about the ideal design of a cognitive online pretest. This study examines whether the number of open-ended probing questions asked during a cognitive online pretest affects the quality and depth of respondents' answers as well as respondents' satisfaction with the survey. We conducted an experiment in which we varied the number of open-ended probing questions that respondents received during a cognitive online pretest. The questionnaire consisted of 26 survey questions, and respondents received either 13 probing questions (n = 120, short version) or 21 probing questions (n = 120, long version). The findings suggest that asking a greater number of open-ended probes in a cognitive online pretest does not undermine the quality of respondents' answers, as measured by six response-quality indicators: (1) amount of probe nonresponse, (2) number of uninterpretable answers, (3) number of dropouts, (4) number of words, (5) response times, and (6) number and type of themes covered by the probes. Furthermore, respondents' satisfaction with the survey is not affected by the number of probes asked.