2020
DOI: 10.31235/osf.io/s45yg
Preprint

The Generalizability of Online Experiments Conducted During The COVID-19 Pandemic

Abstract: The disruptions of the COVID-19 pandemic led many social scientists toward online survey experimentation for empirical research. Generalizing from the experiments conducted during a period of persistent crisis may be challenging due to changes in who participates in online survey research and how the participants respond to treatments. We investigate the generalizability of COVID-era survey experiments with 33 replications of 12 pre-pandemic designs fielded across 13 surveys on American survey respondents obta…

Cited by 38 publications (30 citation statements)
References 25 publications
“…Moreover, as a web-based survey, we cannot estimate a response rate as with more traditional survey designs; however, we note that surveys using similar methods have demonstrated replicable results during COVID-19. 5 As respondents did not see the survey topic until entering the survey itself, it is unlikely our results are enriched for those with particular interest in, or impact from, COVID-19.…”
Section: Discussion
confidence: 98%
“…Arechar and Rand (2020) and Aronow et al. (2020) provide evidence of rising inattention among survey takers sourced from Amazon Mechanical Turk (MTurk) and Lucid throughout 2020. Peyton et al. (2020) formalize the conditions under which this inattention uniformly shrinks treatment effect estimates toward zero and, using a series of replication studies fielded in 2020, show that estimates among attentive respondents are closer to previously published estimates. They recommend that researchers conducting online surveys in 2020, particularly via the Lucid platform, should use attention checks placed near the beginning of the survey to screen out participants who provide low-quality data (e.g., speeding through surveys without reading question content).…”
Section: C.1 Exclusion of Inattentive Survey Respondents
confidence: 56%
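The attenuation mechanism described in the citation above can be illustrated with a toy simulation (not from the paper; all parameters are made up): if a share of respondents answer at random, their zero treatment effect dilutes the pooled difference-in-means estimate toward zero in proportion to that share.

```python
# Toy simulation: inattentive respondents who answer at random contribute a
# zero treatment effect, shrinking the pooled estimate toward zero.
import random

random.seed(0)

def simulate_ate(n=100_000, true_effect=0.20, inattentive_share=0.4):
    """Difference in means on a binary outcome with a mixed sample."""
    treated, control = [], []
    for _ in range(n):
        attentive = random.random() >= inattentive_share
        assign_treat = random.random() < 0.5
        if attentive:
            p = 0.5 + (true_effect if assign_treat else 0.0)
        else:
            p = 0.5  # random responding: no treatment effect
        y = 1 if random.random() < p else 0
        (treated if assign_treat else control).append(y)
    return sum(treated) / len(treated) - sum(control) / len(control)

full = simulate_ate(inattentive_share=0.4)      # pooled sample, ~0.6 * 0.20
screened = simulate_ate(inattentive_share=0.0)  # attentive only, ~0.20
print(round(full, 2), round(screened, 2))
```

With 40% inattentive respondents the pooled estimate sits near 0.12 rather than the true 0.20, which is the shrinkage that pre-treatment attention checks are meant to avoid.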
“…Given recent declines in attentiveness among online survey respondents, we removed inattentive respondents at the beginning of the survey (i.e., pre-treatment; Aronow et al. 2020; Peyton et al. 2020; see Online Appendix C.1). Additionally, we constructed survey weights to adjust for differences in respondent demographics using target proportions from the American Community Survey (see Online Appendix C.2).…”
Section: Data and Results
confidence: 99%
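The weighting step described above can be sketched as simple cell-based post-stratification: each respondent is weighted by the ratio of the population share of their demographic cell to its sample share. The target shares below are made-up stand-ins for American Community Survey figures, and the single-variable cells are a deliberate simplification of the raking typically used in practice.

```python
# Hypothetical post-stratification sketch: weight = target share / sample share.
# Targets are illustrative placeholders, not actual ACS proportions.
from collections import Counter

sample = ["college", "college", "college", "no_college"]  # toy sample of 4
target = {"college": 0.35, "no_college": 0.65}            # assumed targets

counts = Counter(sample)
n = len(sample)
weights = [target[cell] / (counts[cell] / n) for cell in sample]
# College respondents are over-represented (3/4 sampled vs 0.35 target),
# so they are down-weighted; the under-represented cell is up-weighted.
print([round(w, 3) for w in weights])
```

Weighted estimates then use these values so that over-sampled groups count for less, pulling sample demographics toward the population targets.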