2022
DOI: 10.1017/xps.2022.8

Fraud in Online Surveys: Evidence from a Nonprobability, Subpopulation Sample

Abstract: We hired a well-known market research firm whose surveys have been published in leading political science journals, including JEPS. Based on a set of rigorous “screeners,” we detected what appears to be exceedingly high rates of identity falsification: over 81 percent of respondents seemed to misrepresent their credentials to gain access to the survey and earn compensation. Similarly high rates of presumptive character falsification were present in panels from multiple sub-vendors procured by the firm. Moreover…

Cited by 4 publications (2 citation statements)
References 15 publications (19 reference statements)

“…To minimize topical selection bias, we did not inform respondents of the purpose of the survey when they entered it, and questions covered a broad range of topics, mostly related to public health (the questions regarding guns also came late in the survey, making self-selection based on guns very unlikely). We filtered out inattentive, repeat, and semi-automated respondents through multiple closed- and open-ended attention checks to address growing concerns about fraud and inattention in online panels (Bell and Gift 2022). Emerging evidence suggests this general approach to data collection can perform as well as traditional probability sampling (Enns and Rothschild 2021; Lehdonvirta et al. 2021).…”
Section: Methods
confidence: 99%
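
The screening approach described in the excerpt above (closed- and open-ended attention checks plus removal of repeat and semi-automated respondents) can be illustrated with a minimal sketch. This is not the cited authors' actual pipeline; the column names (`attention_check_1`, `open_ended_response`, `ip_address`), the expected answer, and the length threshold are hypothetical assumptions chosen only to show the pattern.

```python
# Minimal sketch of attention-check and duplicate screening for an online panel.
# All column names and thresholds are illustrative assumptions, not the study's.
import pandas as pd

def screen_respondents(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only rows that pass basic attention and duplication screens."""
    # Closed-ended check: respondents were instructed to select "agree".
    passed_closed = df["attention_check_1"] == "agree"

    # Open-ended check: blank or implausibly short free-text answers often
    # indicate bots or semi-automated respondents.
    passed_open = df["open_ended_response"].fillna("").str.len() >= 10

    # Repeat respondents: keep only the first submission per IP address.
    first_attempt = ~df.duplicated(subset="ip_address", keep="first")

    return df[passed_closed & passed_open & first_attempt]

if __name__ == "__main__":
    raw = pd.read_csv("responses.csv")  # hypothetical export from the panel vendor
    clean = screen_respondents(raw)
    print(f"Kept {len(clean)} of {len(raw)} responses after screening")
```

In practice, researchers typically combine several such screens, since any single check (e.g., one closed-ended item) is easy for inattentive or fraudulent respondents to pass by chance.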
“…In the original publication of Bell and Gift (2022), a non-attribution statement for author Andrew Bell was inadvertently excluded. The non-attribution statement for Bell reads as follows:…”
confidence: 99%