Survey respondents differ in the attention and effort they devote to responding to items. Researchers can use a number of methods to identify respondents who fail to exert sufficient effort, thereby increasing the rigor of analysis and enhancing the trustworthiness of study results. Screening techniques can be organized into three general categories, which differ in their impact on survey design and in potential respondent awareness. Assumptions and considerations regarding the appropriate use of screening techniques are discussed alongside descriptions of each technique. The utility of each screening technique is a function of survey design and administration, and each technique has the potential to identify different types of insufficient effort. An example dataset is provided to illustrate these differences and to familiarize readers with the computation and implementation of the screening techniques. Researchers are encouraged to consider data screening when designing a survey, to select screening techniques on the basis of theoretical considerations (or empirical considerations when pilot testing is an option), and to report the results of analyses both before and after employing data screening techniques.
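To make one such screen concrete, the minimal sketch below (Python, using hypothetical Likert-scale data; the function name and example responses are illustrative and not taken from any particular study) computes a longstring index: the length of a respondent's longest run of identical consecutive answers, where unusually long runs can signal insufficient effort.

```python
import numpy as np

def longstring(responses):
    """Length of the longest run of identical consecutive responses
    for one respondent; very long runs suggest straightlining."""
    longest = current = 1
    for prev, curr in zip(responses, responses[1:]):
        current = current + 1 if curr == prev else 1
        longest = max(longest, current)
    return longest

# Hypothetical 10-item Likert (1-5) responses for three respondents
data = np.array([
    [3, 3, 3, 3, 3, 3, 3, 3, 3, 3],  # pure straightlining
    [4, 2, 5, 3, 4, 1, 2, 5, 3, 4],  # varied responding
    [2, 2, 2, 5, 4, 4, 4, 4, 1, 3],  # one moderate run
])

for i, row in enumerate(data):
    print(f"respondent {i}: longstring = {longstring(row)}")
```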
We examine the appropriateness of response speed and response consistency as data quality indicators in online samples. Across several inventories, results show that response consistency decreases dramatically at speeds faster than 1 second per item. Our results suggest that careless responding may be fairly common in online samples and often inflates the expected correlation between items in a survey, with implications for the likelihood of false positives and for analyses of factor structure. Given how careless responding can influence estimated associations between variables, we strongly recommend that researchers include response speed and consistency screens in their research, and we provide empirically informed cut points for data screens that should be useful across a wide range of instruments and settings.
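As a rough illustration of how speed and consistency screens might be implemented together, the sketch below flags respondents who average less than 1 second per item (the cut point reported above) and computes a within-person even-odd consistency index; the simulated data, the scale structure, and the choice of even-odd consistency as the index are assumptions for illustration, not the study's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
N_RESP, N_SCALES, ITEMS_PER_SCALE = 6, 8, 6
N_ITEMS = N_SCALES * ITEMS_PER_SCALE

# Hypothetical 1-5 Likert data: each respondent has a trait level per subscale
traits = rng.normal(3, 1, size=(N_RESP, N_SCALES))
data = np.clip(np.rint(np.repeat(traits, ITEMS_PER_SCALE, axis=1)
                       + rng.normal(scale=0.7, size=(N_RESP, N_ITEMS))), 1, 5)

# Speed screen: flag anyone averaging under 1 second per item
total_seconds = rng.uniform(20, 300, size=N_RESP)  # hypothetical timings
speed_flag = (total_seconds / N_ITEMS) < 1.0

def even_odd_consistency(person):
    """Correlate odd-half and even-half subscale means within one person;
    low values suggest inconsistent (possibly careless) responding."""
    scales = person.reshape(N_SCALES, ITEMS_PER_SCALE)
    return np.corrcoef(scales[:, 0::2].mean(axis=1),
                       scales[:, 1::2].mean(axis=1))[0, 1]

consistency = np.array([even_odd_consistency(p) for p in data])
print("seconds per item:    ", np.round(total_seconds / N_ITEMS, 2))
print("flagged for speed:   ", speed_flag)
print("even-odd consistency:", np.round(consistency, 2))
```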
The purpose of this study is to empirically address questions pertaining to the effects of data screening practices in survey research. The study examines the impact of screening techniques on data and statistical analyses, and it also serves as an initial attempt to estimate descriptive statistics and graphically display the distributions of popular screening techniques. Data were obtained from an online sample (N = 307) that completed demographic items and measures of character strengths. Screening indices demonstrate minimal overlap and differ in the number of participants flagged. Existing cutoff scores for most screening techniques seem appropriate, but cutoff values for consistency-based indices may be too liberal. Screens also differ in the extent to which they affect survey results: the use of screening techniques can alter inter-item correlations, inter-scale correlations, reliability estimates, and statistical conclusions. While data screening can improve the quality and trustworthiness of data, screening techniques are not interchangeable. Researchers and practitioners should be aware of the differences between data screening techniques and apply screens appropriate to their survey characteristics and study design. Low-impact direct and unobtrusive screens such as self-report indicators, bogus items, instructed items, longstring, individual response variability, and response time are relatively simple to administer and analyze. The fact that data screening can influence the statistical results of a study demonstrates that low-quality data can distort hypothesis testing in organizational research and practice. We recommend analyzing results both before and after screens have been applied.
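The minimal-overlap finding is easy to probe in code. The sketch below computes two of the screens named above, individual response variability (the within-person standard deviation of responses) and response time, on simulated data, then reports which respondents each screen flags; all data and cutoffs here are hypothetical, chosen only to show that different screens can flag different people.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-5 Likert responses: 8 respondents, 20 items
data = rng.integers(1, 6, size=(8, 20)).astype(float)
data[0] = 3.0                            # an invariant (straightlining) respondent

irv = data.std(axis=1, ddof=1)           # individual response variability
seconds = rng.uniform(10, 200, size=8)   # hypothetical total completion times
sec_per_item = seconds / data.shape[1]

irv_flag = irv < 0.5                     # illustrative cutoff, not an endorsed value
speed_flag = sec_per_item < 1.0          # illustrative cutoff

print("flagged by IRV only:  ", np.where(irv_flag & ~speed_flag)[0])
print("flagged by speed only:", np.where(speed_flag & ~irv_flag)[0])
print("flagged by both:      ", np.where(irv_flag & speed_flag)[0])
```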
Landers and Behrend (2015) are the most recent in a long line of researchers who have suggested that online samples generated from sources such as Amazon's Mechanical Turk (MTurk) are as good as, or potentially even better than, the typical samples found in psychology studies. Importantly, the authors caution that researchers and reviewers need to reflect carefully on the goals of research when evaluating the appropriateness of samples. However, although they argue that certain types of samples should not be dismissed out of hand, they note that there is only scant evidence demonstrating that online sources can provide usable data for organizational research and that further research is needed to evaluate the validity of these new sources of data. Because the target article does not directly address the potential problems with such samples, we review what is known about collecting online data (with a particular focus on MTurk) and illustrate some potential problems using data derived from such sources.
Recent years have seen a renewed interest in insufficient effort responding (IER). Previous research has demonstrated that IER can have detrimental effects on survey research, ranging from introducing untrustworthy data to distorting psychometric and statistical results. The present simulations examine two forms of IER, straightlining (SL) and random responding (RR), to determine whether these response patterns have differential impacts on data. In three studies, we explore the combined effects of extreme SL and RR, the effects of full and partial RR, and the effects of full and partial SL on scale characteristics such as inter-item correlations, alpha, and component structure. We also explore how various IER response distributions may influence these statistics. Results demonstrate a tendency for SL to increase, and RR to decrease, the magnitude of inter-item correlations, alpha, and the first component eigenvalue. Results also indicate that the impact of SL may be more pronounced than the impact of RR in the organisational sciences. It is important for researchers to consider the type of IER, in addition to its prevalence in a sample, before conducting statistical analyses.
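The direction of these effects can be reproduced with a small simulation. The sketch below generates attentive responses from a one-factor model, appends either straightlining or random-responding contaminants, and recomputes alpha and the first eigenvalue of the inter-item correlation matrix; the sample sizes, contamination rates, and data-generating model are assumptions for illustration, not the simulation design of the studies described above.

```python
import numpy as np

rng = np.random.default_rng(42)

def cronbach_alpha(x):
    """Cronbach's alpha for an (n_respondents, n_items) matrix."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum()
                          / x.sum(axis=1).var(ddof=1))

def first_eigenvalue(x):
    """Largest eigenvalue of the inter-item correlation matrix."""
    return np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)).max()

n, k = 300, 10
# Attentive responders: one common factor plus noise, mapped onto a 1-5 scale
factor = rng.normal(size=(n, 1))
clean = np.clip(np.rint(3 + factor + rng.normal(scale=0.8, size=(n, k))), 1, 5)

sl = np.tile(rng.integers(1, 6, size=(60, 1)), (1, k)).astype(float)  # straightliners
rr = rng.integers(1, 6, size=(60, k)).astype(float)                   # random responders

for label, extra in [("clean", None), ("clean + SL", sl), ("clean + RR", rr)]:
    x = clean if extra is None else np.vstack([clean, extra])
    print(f"{label:10s} alpha = {cronbach_alpha(x):.3f}, "
          f"first eigenvalue = {first_eigenvalue(x):.3f}")
```

Because each simulated straightliner repeats a single value across all items, SL adds perfectly correlated between-person variance and inflates these statistics, whereas RR adds uncorrelated noise and attenuates them, consistent with the pattern reported above.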