Although agree-disagree (AD) rating scales suffer from acquiescence response bias, entail a higher cognitive burden, and yield data of lower quality (Krosnick, 1991; Saris, Revilla, Krosnick, & Schaeffer, forthcoming), these scales remain popular with researchers for practical reasons (e.g., ease of item preparation, speed of administration, reduced administration costs). This paper shows that if researchers want to use AD scales, they should offer 5 answer categories rather than 7 or 11, because the latter yield data of lower quality. This is shown using data from four multitrait-multimethod (MTMM) experiments implemented in the third round of the European Social Survey. The quality of items with different rating scale lengths was computed and compared.
Purpose: Despite the rapid spread of mobile devices in survey participation, there is still little knowledge about the opportunities and challenges that arise from this increase. The purpose of this paper is to study how respondents' preferences drive their choice of a given device when participating in surveys. Furthermore, this paper evaluates the tolerance of participants when specifically asked to use mobile devices and to carry out other specific tasks, such as taking photographs.
Design/methodology/approach: Data were collected through surveys in Spain, Portugal and Latin America by Netquest, an online fieldwork company.
Findings: Netquest panellists still mainly preferred to participate in surveys using personal computers. Nevertheless, the use of tablets and smartphones in surveys showed an increasing trend; more panellists would prefer mobile devices if the questionnaires were adapted to them. Most respondents were not opposed to the idea of participating in tasks such as taking photographs or sharing GPS information.
Research limitations/implications: The research concerns an opt-in online panel that covers a specific area. For probability-based panels and other areas the findings may differ.
Practical implications: The findings show that online access panels need to adapt their surveys to mobile devices to satisfy the increasing demand from respondents. This will also enable new and potentially very interesting data collection methods.
Originality/value: This study contributes to survey methodology with updated findings on a currently underexplored area. Furthermore, it provides commercial online panels with useful information for determining their future strategies.
Background: Individual behavior, particularly choices about prevention, plays a key role in the transmission of vector-borne diseases (VBDs). Since the actual risk of infection is often uncertain, individual behavior is influenced by the perceived risk. A low risk perception is likely to diminish the use of preventive measures (behavior). If risk perception is a good indicator of the actual risk, then it has important implications in a context of disease elimination. However, more research is needed to improve our understanding of the role of human behavior in disease transmission. The objective of this study is to explore whether preventive behavior is responsive to risk perception, taking into account the links with disease knowledge and controlling for individuals' socioeconomic and demographic characteristics. More specifically, the study focuses on malaria, dengue fever, Zika and cutaneous leishmaniasis (CL), using primary data collected in Guyana, a key country for the control and/or elimination of VBDs given its geographic location.
Methods and findings: The data were collected between August and December 2017 in four regions of the country. Questions on disease knowledge, risk perception and self-reported use of preventive measures were asked of each participant for the four diseases. A structural equation model was estimated. It focused on data collected from private households only, in order to control for individuals' socioeconomic and demographic characteristics, which led to a sample size of 497 participants. The findings showed evidence of a bidirectional association between risk perception and behavior. A one-unit increase in risk perception translated into a 0.53 unit … (PLOS Neglected Tropical Diseases)
Surveys have been used as a main tool of data collection in many areas of research and for many years. However, the environment is changing ever more quickly, creating new challenges and opportunities. This article argues that, in this new context, human memory limitations lead to inaccurate results when surveys are used to study objective online behavior: people cannot recall everything they did. It therefore investigates the possibility of using, in addition to survey data, passive data from a tracking application (called a "meter") installed on participants' devices to register their online behavior. After evaluating the extent of some of the main drawbacks of passive data collection with a case study (the Netquest metered panel in Spain), this article shows that the data from the web survey and the meter lead to very different results about the online behavior of the same sample of respondents, demonstrating the need to combine several sources of data collection in the future.
Evaluating the quality of the data is a key preoccupation for researchers who want to be confident in their results. When web surveys are used, it seems even more crucial, since researchers have less control over the data collection process. However, they also have the possibility to collect paradata that may help evaluate quality. Using these paradata, it was noticed that some respondents in web panels spend much less time than expected completing the surveys. This raises concerns about the quality of the data obtained. Nevertheless, not much is known about the link between response times (RTs) and quality. Therefore, the goal of this study is to examine the link between the RTs of respondents in an online survey and other, more usual indicators of quality used in the literature: properly following an instructional manipulation check, coherence and precision of answers, absence of straight-lining, and so on. In addition, we are interested in the link of RT and the quality indicators with respondents' self-evaluation of the effort they made to answer the survey. Using a structural equation modeling approach that allows separating the structural and measurement models and controlling for potential spurious effects, we find a significant relationship between RT and quality in the three countries studied. We also find a significant, but weaker, relationship between RT and self-evaluation. However, we did not find a significant link between self-evaluation and quality.
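As a rough illustration (not the authors' actual measures), per-respondent indicators such as speeding and straight-lining can be computed directly from response times and a grid of answers. All data, thresholds, and variable names below are hypothetical.

```python
import statistics

# Hypothetical respondent records: total response time in seconds and
# answers to a 5-item grid on a 1-7 scale.
respondents = [
    {"rt": 420, "grid": [3, 4, 2, 5, 4]},  # differentiated answers
    {"rt": 95,  "grid": [4, 4, 4, 4, 4]},  # straight-liner, very fast
    {"rt": 310, "grid": [6, 5, 6, 7, 5]},
]

def straight_lined(grid):
    """A crude nondifferentiation flag: identical answers to every item."""
    return len(set(grid)) == 1

def speeder(rt, times):
    """Flag respondents far below the median completion time (here < 50%)."""
    return rt < 0.5 * statistics.median(times)

times = [r["rt"] for r in respondents]
for r in respondents:
    r["flags"] = {"straightlining": straight_lined(r["grid"]),
                  "speeding": speeder(r["rt"], times)}

print([r["flags"] for r in respondents])
```

In this toy data, only the second respondent is flagged on both indicators; real analyses would use more refined cutoffs and additional checks (e.g., instructional manipulation checks).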
Much research has been done comparing grids and item-by-item formats. However, the results are mixed, and more research is needed, especially when a significant proportion of respondents answer using smartphones. In this study, we implemented an experiment with seven groups (n = 1,476), varying the device used (PC or smartphone), the presentation of the questions (grids, item-by-item vertical, item-by-item horizontal), and, in the case of smartphones only, the visibility of the "next" button (always visible or only visible at the end of the page, after scrolling down). The survey was conducted by the Netquest online fieldwork company in Spain in 2016. We examined several outcomes for three sets of questions, which are related to respondent behavior (completion time, lost focus, answer changes, and screen orientation) and data quality (item missing data, nonsubstantive responses, instructional manipulation check failure, and nondifferentiation). The most striking difference found is for the placement of the next button in the smartphone item-by-item conditions: When the button is always visible, item missing data are substantially higher.
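A minimal sketch of how an item-missing-data rate could be compared across experimental conditions; the condition labels, answer data, and function are hypothetical and only mimic the design described above.

```python
# Hypothetical answers to a 4-item block; None marks item nonresponse.
# Condition labels mimic the smartphone item-by-item design variants.
answers = {
    "smartphone_item_by_item_next_visible": [
        [1, None, 3, None], [2, 2, None, 4],
    ],
    "smartphone_item_by_item_next_after_scroll": [
        [1, 2, 3, 4], [2, 2, 3, None],
    ],
}

def item_missing_rate(rows):
    """Share of missing cells among all item-by-respondent cells."""
    cells = [a for row in rows for a in row]
    return sum(a is None for a in cells) / len(cells)

rates = {cond: item_missing_rate(rows) for cond, rows in answers.items()}
print(rates)  # the always-visible condition shows a higher missing rate
```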
Most mobile devices nowadays have a camera. Moreover, posting and sharing images has been found to be one of the most frequent and engaging Internet activities. However, to our knowledge, no research has explored the feasibility of asking respondents of online surveys to upload images in answer to survey questions. The main goal of this article is to investigate the viability of asking respondents of an online opt-in panel to upload, during a mobile web survey, first a photo taken at that moment and second an image already saved on their smartphone. In addition, we test to what extent the Google Vision application programming interface (API), which can label images into categories, produces tags similar to those of a human coder. Overall, results from a survey conducted among millennials in Spain and Mexico (N = 1,614) show that more than half of the respondents uploaded an image. Of those, 77.3% and 83.4%, respectively, complied with what the question asked. Moreover, respectively, 52.4% and 65.0% of the images were similarly codified by the Google Vision API and the human coder. In addition, the API codified 1,818 images in less than 5 minutes, whereas the human coder spent nearly 35 hours to complete the same task.
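The agreement between machine and human labels can be quantified in several ways; a simple sketch, with entirely hypothetical tags and an overlap-based definition of "similarly codified" (the article's actual coding rule may differ):

```python
# Hypothetical labels for three uploaded images: tags returned by an
# automatic image labeler vs. tags assigned by a human coder.
api_tags = {
    "img1": {"dog", "animal", "outdoor"},
    "img2": {"food", "plate"},
    "img3": {"car"},
}
human_tags = {
    "img1": {"dog", "pet"},
    "img2": {"beach", "sea"},
    "img3": {"car", "street"},
}

def similarly_coded(a, b):
    """Treat an image as 'similarly codified' if the tag sets overlap."""
    return bool(a & b)

agree = [img for img in api_tags if similarly_coded(api_tags[img], human_tags[img])]
rate = len(agree) / len(api_tags)
print(rate)  # 2 of 3 images share at least one tag
```

More refined metrics (e.g., Jaccard similarity per image, or agreement on a single dominant category) would change the numbers but follow the same comparison logic.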
Passive data from a tracking application (or "meter") installed on participants' devices to register the URLs visited have great potential for studying people's online activities. However, given privacy concerns, obtaining cooperation in installing a meter can be difficult and can lead to selection bias. Therefore, in this article, we address three research questions: (1) To what extent are panelists willing to install a meter? (2) On which devices do they install the meter? (3) How do panelists who installed the meter differ from those who were invited but did not install it? Using data from online non-probability opt-in panels in nine countries, we found that the proportions of panelists installing the meter usually vary from 20% to 42%. Moreover, 20-25% of participants installed the meter on three or more devices. Finally, those who were invited but did not install the meter differ from those who did.