Do web surveys still yield lower response rates than other survey modes? To answer this question, we replicated and extended a 2008 meta-analysis which found, based on 45 experimental comparisons, that web surveys had an 11-percentage-point lower response rate than other survey modes. Fundamental changes in internet accessibility and use since the publication of the original meta-analysis suggest that people’s propensity to participate in web surveys has changed considerably in the meantime. However, in our replication and extension study, which comprised 114 experimental comparisons between web and other survey modes, we found almost no change: web surveys still yielded lower response rates than other modes (a difference of 12 percentage points). Furthermore, we found that prenotifications, the sample recruitment strategy, the survey’s solicitation mode, the type of target population, the number of contact attempts, and the country in which the survey was conducted moderated the magnitude of the response rate differences. These findings have substantial implications for web survey methodology and operations.
A major challenge in web-based cross-cultural data collection is varying response rates, which can result in low data quality and nonresponse bias. Country-specific factors, such as political and demographic conditions, economic and technological development, and the socio-cultural environment, may affect response rates to web surveys. This study evaluates web survey response rates using meta-analytical methods based on 110 experimental studies from seven countries. Three dependent variables, so-called effect sizes, are used: the web survey response rate, the response rate of the comparison survey mode, and the difference between the two response rates. The meta-analysis indicates that four country-specific factors (political and demographic, economic, technological, and socio-cultural) affect the magnitude of web survey response rates. Specifically, web surveys achieve high response rates in countries with high population growth, high internet coverage, and a high propensity to participate in surveys. On the other hand, web surveys are at a disadvantage in countries with a high population age and high cell phone coverage. This study concludes that web surveys can be a reliable alternative to other survey modes due to their consistent response rates and are expected to be used more frequently in national and international settings.
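The three effect sizes described above can be sketched in a few lines. This is a minimal illustration with hypothetical response rates, not data from the study: each tuple stands for one experimental comparison between a web survey and a comparison mode, and the pooled difference is a simple unweighted mean (real meta-analyses would weight studies, e.g., by sample size).

```python
# Hypothetical experimental comparisons (web rate, comparison-mode rate);
# these numbers are illustrative only, not taken from the 110 studies.
studies = [
    (0.35, 0.48),
    (0.41, 0.50),
    (0.28, 0.42),
]

# The three effect sizes per study: web response rate, comparison-mode
# response rate, and the difference between the two.
web_rates = [w for w, _ in studies]
comparison_rates = [c for _, c in studies]
differences = [w - c for w, c in studies]

# Unweighted pooled difference across studies, in percentage points.
mean_diff = sum(differences) / len(differences)
print(f"Mean web response rate:        {sum(web_rates) / len(web_rates):.2f}")
print(f"Mean comparison response rate: {sum(comparison_rates) / len(comparison_rates):.2f}")
print(f"Mean difference:               {mean_diff * 100:.1f} percentage points")
```

A negative pooled difference, as in this toy example, corresponds to web surveys underperforming the comparison mode.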
Attrition poses an important challenge for panel surveys. In these surveys, respondents’ decisions about whether to participate in reinterviews are affected by their participation in prior waves of the panel. However, in self-administered mixed-mode panels, the survey experience differs between the mail and web modes. Consequently, this study investigated how respondents’ prior experience with the characteristics of a survey—such as its length, difficulty, interestingness, sensitivity, and the diversity of the questionnaire—affects their decision about whether to participate again. We found that the length of a questionnaire seems to be of such importance to respondents that they base their participation on this characteristic, regardless of the mode. Our findings also suggest that the difficulty and diversity of questionnaires are readily accessible information that respondents use in the mail mode when deciding whether to participate again, whereas these characteristics have no effect in the web mode. In addition, privacy concerns have an impact in the web mode but not in the mail mode.
Many surveys aim to achieve high response rates to keep bias due to nonresponse low. However, research has shown that the relationship between the nonresponse rate and nonresponse bias is small. In fact, high response rates may lead to measurement error if respondents with low response propensities provide survey responses of low quality. In this paper, we explore the relationship between response propensity and measurement error—specifically, motivated misreporting, the tendency to give inaccurate answers in order to speed through an interview. Using data from four surveys conducted in several countries and modes, we analyze whether motivated misreporting is worse among those respondents who were the least likely to respond to the survey. Contrary to the prediction of our theoretical model, we find only limited evidence that reluctant respondents are more likely to misreport.
Filter questions are used to administer follow-up questions to eligible respondents while allowing respondents who are not eligible to skip those questions. Filter questions can be asked in either the interleafed or the grouped format. In the interleafed format, the follow-ups are asked immediately after each filter question; in the grouped format, follow-ups are asked after the block of filter questions. Underreporting can occur in the interleafed format because respondents seek to reduce the burden of the survey, a phenomenon called motivated misreporting. Because smartphone surveys are more burdensome than web surveys completed on a computer or laptop—due to smaller screen sizes, longer page loading times, and more distractions—we expect motivated misreporting to be more pronounced on smartphones. Furthermore, we expect that misreporting occurs not only in the filter questions themselves but also extends to data quality in the follow-up questions. We randomly assigned 3,517 respondents of a German online access panel to complete the survey on either a PC or a smartphone. Our results show that while both PC and smartphone respondents trigger fewer filter questions in the interleafed format than in the grouped format, we found no differences between PC and smartphone respondents in the number of filter questions triggered. However, smartphone respondents provide lower data quality in the follow-up questions, especially in the grouped format. We conclude with recommendations for web survey designers who intend to incorporate smartphone respondents in their surveys.