The rising penetration of smartphones now gives researchers the chance to collect data from smartphone users through passive mobile data collection via apps. Examples of passively collected data include geolocation, physical movements, online behavior and browser history, and app usage. However, to passively collect data from smartphones, participants need to agree to download a research app to their smartphone. This leads to concerns about nonconsent and nonparticipation. In the current study, we assess the circumstances under which smartphone users are willing to participate in passive mobile data collection. We surveyed 1,947 smartphone-owning members of a German nonprobability online panel using vignettes that described hypothetical studies in which data are automatically collected by a research app on a participant’s smartphone. The vignettes varied the levels of several dimensions of the hypothetical study, and respondents were asked to rate their willingness to participate in such a study. Willingness to participate in passive mobile data collection is strongly influenced by the incentive promised for study participation but also by other study characteristics (sponsor, duration of the data collection period, option to switch off the app) as well as respondent characteristics (privacy and security concerns, smartphone experience).
There is an ongoing debate in the survey research literature about whether and when probability and nonprobability sample surveys produce accurate estimates of a larger population. Statistical theory provides a justification for confidence in probability sampling as a function of the survey design, whereas inferences based on nonprobability sampling are entirely dependent on models for validity. This article reviews the current debate about probability and nonprobability sample surveys. We describe the conditions under which nonprobability sample surveys may provide accurate results in theory and discuss empirical evidence on which types of samples produce the highest accuracy in practice. From these theoretical and empirical considerations, we derive best-practice recommendations and outline paths for future research.
Various open probability-based panel infrastructures have been established in recent years, allowing researchers to collect high-quality survey data. In this report, we describe the processes and deliverables of setting up the GESIS Panel, the first probability-based mixed-mode panel infrastructure in Germany open for data collection to the academic research community. The reference population for the GESIS Panel is the German-speaking population aged between 18 and 70 years permanently residing in Germany. In 2013, approximately 5,000 panelists had been recruited from a random sample drawn from municipal population registers. We describe the outcomes of the sampling strategy and the multistep recruitment process, involving computer-aided personal interviews conducted at respondents’ homes. Next, we describe the outcomes of the two self-administered survey modes (online and paper-and-pencil) of the GESIS Panel used for the initial profile survey and all subsequent bimonthly data collection waves. Across all stages of setting up the GESIS Panel, we report sample composition discrepancies for key demographic variables between the GESIS Panel and established benchmark surveys. Overall, the findings highlight the usefulness of pursuing a mixed-mode strategy when building a probability-based panel infrastructure in Germany.
Growing smartphone penetration and the integration of smartphones into people’s everyday practices offer researchers opportunities to augment survey measurement with smartphone-sensor measurement or to replace self-reports altogether. Potential benefits include lower measurement error, a widening of research questions, collection of in situ data, and lower respondent burden. However, privacy considerations and other concerns may lead to nonparticipation. To date, little is known about the mechanisms of willingness to share sensor data in the general population, and no evidence is available concerning the stability of that willingness over time. The present study focuses on survey respondents’ willingness to share data collected using smartphone sensors (GPS, camera, and wearables) in a probability-based online panel of the general population of the Netherlands. A randomized experiment varied the study sponsor, the framing of the request, the emphasis on control over the data collection process, and the assurance of privacy and confidentiality. Respondents were asked repeatedly about their willingness to share the data collected using smartphone sensors, with varying periods before the second request. Willingness to participate in sensor-based data collection varied by the type of sensor, study sponsor, order of the request, respondent’s familiarity with the device, previous experience with participating in research involving smartphone sensors, and privacy concerns. Willingness increased when respondents were asked repeatedly and varied by sensor and task. The timing of the repeated request, one month or six months after the initial request, did not have a significant effect on willingness.
In this article, we investigate changes in survey reporting due to prior interviewing. Two field experiments were implemented in a probability-based online panel in which the order of the questionnaires was switched. Although experimental methods for studying panel conditioning are preferable, such experiments in longitudinal studies are rare: studies on conditioning demand additional resources and might themselves influence respondents' answers. Panel conditioning is mostly associated with measurement error. However, the view of it as an exclusively negative phenomenon is not comprehensive: learning the rules of the interview may lead to increases as well as decreases in data quality (advantageous vs. disadvantageous conditioning). Overall, we find little evidence of advantageous conditioning and no evidence of disadvantageous conditioning. Beyond this reassuring finding, this article advances the field by using propensity score weighting to account for attrition and other confounding factors and by using paradata to evaluate the plausibility of alternative explanations of panel conditioning.