2022
DOI: 10.1177/1525822x221100904
Using Attributes of Survey Items to Predict Response Times May Benefit Survey Research

Abstract: Researchers have become increasingly interested in response times to survey items as a measure of cognitive effort. We used machine learning to develop a model predicting response times from 41 attributes of survey items (e.g., question length, response format, linguistic features), using response time data collected in a large general-population sample. The developed algorithm can be used to derive reference values for expected response times for most commonly used survey items.
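The paper itself does not include code; the following is a minimal illustrative sketch, under the assumption of a tabular dataset with one row per survey item and columns for item attributes like those described (e.g., word count, response format, readability), showing how a gradient-boosted regression model could be fit to predict item response times. The file name and column names are hypothetical.

```python
# Illustrative sketch (not the authors' code): predicting item-level
# response times from item attributes with a gradient-boosted regressor.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical file: one row per survey item, columns for item attributes
# (e.g., number of words, response format, readability score) plus the
# observed median response time in seconds.
items = pd.read_csv("item_attributes.csv")

feature_cols = [c for c in items.columns if c != "median_rt_seconds"]
X = pd.get_dummies(items[feature_cols])   # one-hot encode categorical attributes
y = items["median_rt_seconds"]

model = GradientBoostingRegressor(random_state=0)

# Cross-validated R^2 gives a rough sense of how well item attributes
# predict response times for items not used in training.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"Mean cross-validated R^2: {scores.mean():.2f}")

# Fit on all items; predictions can then serve as reference values for
# the expected response time of a new item with known attributes.
model.fit(X, y)
items["predicted_rt_seconds"] = model.predict(X)
```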

Cited by 3 publications (3 citation statements) | References 30 publications (34 reference statements)
“…Our second model was an expanded version of the location-scale model, based on the assumption that different components of intraindividual RT variability can be distinguished from each other that may relate to cognitive abilities in opposite ways. Specifically, survey items vary considerably in difficulty and the cognitive demands associated with them (Schneider et al. forthcoming). In RT modeling, a concept analogous to item difficulty is the “time intensity” of an item, defined as “the amount of time an item tends to require” (Kyllonen and Zu 2016, p. 14).…”
Section: Methods
confidence: 99%
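As background for the quoted passage, one common formulation of this kind of model is sketched below in LaTeX. This is an assumed form (a standard lognormal response-time model with a person-specific scale component), not necessarily the exact specification used by the citing authors.

```latex
% Sketch of a lognormal location-scale response-time model (assumed form).
\begin{align}
  \log T_{ij} = \beta_j - \tau_i + \varepsilon_{ij},
  \qquad \varepsilon_{ij} \sim \mathcal{N}\!\left(0, \sigma_i^{2}\right)
\end{align}
% T_{ij}:      response time of person i on item j
% \beta_j:     time intensity of item j ("the amount of time an item tends to require")
% \tau_i:      speed (location) of person i
% \sigma_i^2:  person-specific residual variance (scale), capturing
%              intraindividual RT variability
```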
“…This precision might also lead to the development of minimum thresholds for trial inclusion, which our data did not support, but which could be important for identifying inattentive responders in paid research. Indeed, our proposals for maximum thresholds might be further improved by modeling expected response times at the level of each questionnaire. 30 Second, we analyzed data from an online sample where participants completed modules on their own without supervision.…”
Section: Discussion
confidence: 99%
“…Following Jensen (2006), we refer to the term complexity as the information load involved in answering a self-report question. Information load cannot be assessed with a single attribute, and we coded 10 different characteristics of each question that are likely related to information load based on prior literature (Bais et al., 2019; Knäuper et al., 1997; Schneider, Jin et al., 2023a; Yan & Tourangeau, 2008) using four approaches, as described below.…”
Section: Indicators of Question Complexity
confidence: 99%