2011
DOI: 10.1177/0049124110392533

Estimating Propensity Adjustments for Volunteer Web Surveys

Abstract: Panels of persons who volunteer to participate in Web surveys are used to make estimates for entire populations, including persons who have no access to the Internet. One method of adjusting a volunteer sample to attempt to make it representative of a larger population involves randomly selecting a reference sample from the larger population. The act of volunteering is treated as a quasi-random process where each person has some probability of volunteering. One option for computing weights for the volunteers i…
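The quasi-random adjustment the abstract describes can be sketched in a few lines. The sketch below is an illustration of the general idea, not the authors' exact estimator: it uses invented covariates, and a plain gradient-descent logistic fit stands in for whatever GLM routine one would use in practice. A volunteer web panel is stacked with a probability-based reference sample, a logistic model estimates each unit's pseudo-probability of volunteering, and each volunteer is weighted by the inverse of that propensity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical covariates (e.g., standardized age and education scores) for:
# a volunteer web panel whose selection is skewed, and a probability-based
# reference sample drawn from the full population.
n_vol, n_ref = 500, 500
vol_X = rng.normal(loc=[-0.5, 0.5], scale=1.0, size=(n_vol, 2))
ref_X = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n_ref, 2))

# Stack the two samples; y = 1 marks volunteers, y = 0 the reference sample.
X = np.vstack([vol_X, ref_X])
X = np.column_stack([np.ones(len(X)), X])  # add an intercept column
y = np.concatenate([np.ones(n_vol), np.zeros(n_ref)])

# Fit a logistic model for the pseudo-probability of volunteering by plain
# gradient descent (a stand-in for any logistic-regression routine).
beta = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta -= 0.01 * X.T @ (p - y) / len(y)

# Inverse-propensity weights for the volunteer cases only.
p_vol = 1.0 / (1.0 + np.exp(-X[:n_vol] @ beta))
weights = 1.0 / p_vol

print(weights.min())  # every inverse-propensity weight exceeds 1
```

In production this step would normally incorporate the reference sample's survey weights and be followed by calibration to known population totals, as several of the citing papers below discuss.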

Cited by 130 publications (112 citation statements)
References 19 publications
“…To meet the need for psychographic or attitudinal variables, some companies and institutes use Reference Surveys, which are also discussed in the literature as alternatives for bias reduction (Valliant & Dever, 2011).…”
Section: Nonprobability Panels
confidence: 99%
“…As a consequence, it seems that, considering the absence of an adequate sampling frame and the applied self-selection recruitment methods, the basic methodological condition for generalizing to the whole population can hardly be met for a volunteer web survey. While the question of representativeness and methodological solutions for volunteer web surveys has been analyzed predominantly for one or a few developed countries (Bethlehem, 2009; Bethlehem & Biffignandi, 2012; Lee, 2006; Loosveldt & Sonck, 2008; Schonlau, van Soest, Kapteyn, & Couper, 2009; Valliant & Dever, 2011), the question remains whether similar patterns of sample bias can be detected for developing countries. An answer to this question is particularly urgent, as emerging web surveys in developing countries will more often be of a non-probability nature due to the lack of proper sampling frames, but when sampling frames improve, letter invitations to participate in a survey will more often be used.…”
Section: Challenges To Collecting Survey Data Off-line and Online
confidence: 99%
“…Furthermore, using a propensity score (whether or not complemented by a calibration procedure) is no guarantee of smaller biases or variances, and there is limited evidence that using nondemographic variables is of great help (AAPOR 2013; Lee 2006). We also used a very large set of sociodemographic variables for our calibration, some of them clearly linked to the variables of interest (age, household composition, education, and employment), which may lead to substantial bias and variance reduction (Dever et al. 2008; Valliant and Dever 2011).…”
Section: Selection Bias and Data Collection Mode Effect Correction
confidence: 99%
“…These latter variables, generally opinions on various social subjects or Internet practice, may help correct the selection bias of the online recruitment and of the online survey. However, a poststratification or a calibration procedure is still needed after or before this method; the evidence is not very clear on the order (Loosveldt and Sonck 2008; Valliant and Dever 2011). The conclusion of the AAPOR report is that the results of both methods are mixed, depending on the theme of the surveys, the variables used for the weighting, the online survey characteristics, and so on.…”
Section: Introduction
confidence: 99%
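The poststratification step mentioned in the excerpt above is straightforward to illustrate. The sketch below uses an invented six-person sample, a single stratifying variable with three levels, and assumed population counts; real applications would stratify on several variables or use raking/calibration instead.

```python
import numpy as np

# Hypothetical starting weights and one categorical variable
# (three education levels, coded 0-2) for a web sample of 6 respondents.
weights = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
stratum = np.array([0, 0, 0, 1, 1, 2])

# Known population counts per stratum (assumed, for illustration).
pop_totals = np.array([300.0, 400.0, 300.0])

# Poststratification: rescale the weights within each stratum so the
# weighted stratum totals match the population counts exactly.
for s in range(len(pop_totals)):
    mask = stratum == s
    weights[mask] *= pop_totals[s] / weights[mask].sum()

print(weights)  # [100. 100. 100. 200. 200. 300.]
```

After this step the weighted sample reproduces the population margins by construction (the weights sum to 1,000 here), which is why the cited papers apply it before or after a propensity adjustment.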