2016
DOI: 10.1016/j.jrp.2016.04.010

Detecting careless respondents in web-based questionnaires: Which method to use?

Abstract: High data quality is an important prerequisite for sound empirical research. Meade and Craig (2012) and Huang, Curran, Keeney, Poposki, and DeShon (2012) discussed methods to detect unmotivated or careless respondents in large web-based questionnaires. We first discuss these methods and present multi-test extensions of person-fit statistics as alternatives. Second, we applied these methods to data collected through a web-based questionnaire, in which some respondents received instructions to respond quickly wh…
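
To make the kind of screening the abstract refers to concrete, here is a minimal sketch of two simple per-respondent indices of the sort discussed by Meade and Craig (2012): the longest run of identical consecutive answers ("longstring") and the within-person standard deviation of responses. The data, cutoffs (a run of 6 or a standard deviation below 0.5), and function names are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def longstring(responses):
    """Length of the longest run of identical consecutive answers."""
    longest, current = 1, 1
    for prev, cur in zip(responses, responses[1:]):
        current = current + 1 if cur == prev else 1
        longest = max(longest, current)
    return longest

def intra_individual_sd(responses):
    """Standard deviation of a respondent's own answers; values near
    zero suggest straightlining."""
    return float(np.std(responses))

# Illustrative 1-5 Likert data: rows are respondents, columns are items.
data = np.array([
    [3, 4, 2, 5, 1, 3, 4, 2],   # varied responding
    [3, 3, 3, 3, 3, 3, 3, 3],   # straightliner
])

for i, row in enumerate(data):
    flag = longstring(row) >= 6 or intra_individual_sd(row) < 0.5  # assumed cutoffs
    print(f"respondent {i}: longstring={longstring(row)}, "
          f"sd={intra_individual_sd(row):.2f}, flagged={flag}")
```

Person-fit statistics, which the paper extends to multi-test settings, go further than such raw-pattern indices by comparing each response pattern against what a measurement model predicts.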

Cited by 155 publications (221 citation statements). References 28 publications (67 reference statements).
“…The use of response times to detect the aberrant answering of assessment items is well known [8–11, 13–20, 51–57]. To our knowledge, no previous research has evaluated the utility of SOAPP-R completion times to predict ADB, despite the fact that respondent deception has been identified as a concern for this questionnaire [7].…”
Section: Discussion (mentioning, confidence: 99%)
“…In the literature, some suggest that responses should be removed once identified as aberrant (e.g., Curran, 2016; Niessen, Meijer, & Tendeiro, 2016). However, data removal can be controversial.…”
Section: Methods (mentioning, confidence: 99%)
“…Some recently proposed screening techniques could not be included in the present analysis due to the nature of the sample and data. Guttman errors and IRT-informed response probabilities are gaining popularity (see Curran, 2016; Niessen, Meijer, & Tendeiro, 2016). However, as these techniques are computed using various operationalizations of item difficulty and/or discrimination, they are most appropriate for use with unidimensional scales containing many items (Meijer, Molenaar, & Sijtsma, 1994).…”
Section: Limitations and Future Directions (mentioning, confidence: 99%)
“…Absent a much larger sample or normative data, estimates of item parameters would be too unstable to be useful for computing these indices (De Ayala & Sava-Bolesta, 1999; DeMars, 2003). Niessen et al. (2016) examined the cutoff values and relationships of these indices with some of the techniques assessed in the current paper. Future research should examine the distributional characteristics of these techniques as well as their potential impact on item inter-relationships or the results of statistical analyses (research questions 3, 4, and 5 above).…”
Section: Limitations and Future Directions (mentioning, confidence: 99%)
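
The excerpt notes that IRT-based indices need stable item parameter estimates, which small samples may not support. As a hedged illustration of a screening index with a distribution-based cutoff that does not require item parameter estimation, here is a Mahalanobis-distance sketch (one of the outlier techniques discussed in this literature, e.g., Meade & Craig, 2012); the significance level and simulated data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_flags(data, alpha=0.001):
    """Flag multivariate outliers via squared Mahalanobis distance.

    Under approximate multivariate normality, D^2 follows a chi-square
    distribution with df = number of items, which supplies a cutoff.
    The alpha level here is an illustrative choice.
    """
    centered = data - data.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(data, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    cutoff = chi2.ppf(1 - alpha, df=data.shape[1])
    return d2, d2 > cutoff

# Simulated 1-5 Likert responses for 200 respondents on 10 items,
# plus one respondent with an extreme alternating pattern.
rng = np.random.default_rng(0)
data = np.clip(np.round(rng.normal(3, 1, size=(200, 10))), 1, 5)
data = np.vstack([data, [[1, 5] * 5]])
d2, flags = mahalanobis_flags(data)
print("flagged respondents:", np.where(flags)[0])
```
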