Is there a point within a self-report questionnaire at which participants start responding carelessly? If so, after how many items do participants reach that point? And what can researchers do to encourage participants to remain careful throughout an entire questionnaire? We conducted two studies (Study 1, N = 358; Study 2, N = 129) to address these questions. We found (a) consistent evidence that participants responded more carelessly as they progressed further into a questionnaire, (b) mixed evidence that participants who were warned that carelessness would be punished showed smaller increases in carelessness, and (c) mixed evidence that increases in carelessness were greater in an unproctored online study (Study 1) than in a proctored laboratory study (Study 2). These findings help clarify when and why careless responding is likely to occur, and they suggest effective preventive strategies.
The current paper reports the results of two randomized experiments designed to test the effects of questionnaire length on careless responding (CR). Both experiments also examined whether the presence of a behavioral consequence (i.e., a reward or a punishment) designed to encourage careful responding buffers the effects of questionnaire length on CR. Collectively, our two studies found (a) some support for the main effect of questionnaire length, (b) consistent support for the main effects of the consequence manipulations, and (c) very limited support for the buffering effect of the consequence manipulations. Because the advancement of many subfields of psychology rests on the availability of high-quality self-report data, further research should examine the causes and prevention of CR.
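Neither abstract specifies how CR was scored, but one widely used index in this literature is the "longstring": the longest run of identical consecutive answers a participant gives, which tends to grow when responding becomes careless. The sketch below is a minimal illustration under that assumption; the function, the 15-item response vector, and the split into thirds are hypothetical and are not taken from either study.

```python
# Hypothetical sketch: a longstring careless-responding index computed per
# questionnaire segment, to check whether carelessness rises later in the form.

def longstring(responses):
    """Return the length of the longest run of identical consecutive responses."""
    longest = current = 1
    for prev, curr in zip(responses, responses[1:]):
        current = current + 1 if curr == prev else 1
        longest = max(longest, current)
    return longest

# Illustrative answers from one participant on a 15-item Likert questionnaire,
# split into thirds to compare early versus late responding.
answers = [3, 4, 2, 5, 3, 4, 4, 3, 2, 4, 5, 5, 5, 5, 5]
third = len(answers) // 3
for i in range(3):
    segment = answers[i * third:(i + 1) * third]
    print(f"Segment {i + 1}: longstring = {longstring(segment)}")
```

In this made-up example the final third yields the largest longstring value, the pattern one would expect if participants grow careless toward the end of a questionnaire.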
Open-source software (OSS) is a key aspect of software creation. However, little is known about programmers' decisions to trust software from OSS websites. The current study emulated OSS websites and manipulated reputation and performance factors in the stimuli according to the heuristic-systematic processing model. We sampled professional programmers with at least three years of experience from Amazon Mechanical Turk (N = 38). We used a 3 × 3 within-subjects design to investigate the effects of OSS reputation and performance on users' time spent on code, number of interface clicks, trustworthiness perceptions, and willingness to use OSS code. Participants spent more time on, and clicked the interface more often for, code that was high in reputation; meta-information included with OSS tools thus affected the degree to which computer programmers interacted with and perceived online code repositories. Participants also reported higher trustworthiness perceptions of, and greater trust in, highly reputable OSS code. Notably, we observed fewer significant main effects for the performance manipulation, which may indicate that participants considered performance attributes mainly in the context of reputation-relevant information. That is, the degree to which programmers investigate and then trust OSS code may depend on initial reputation ratings.
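A 3 × 3 within-subjects design of this kind is typically analyzed with a two-factor repeated-measures ANOVA. The sketch below shows one way to set that up in Python with statsmodels' AnovaRM; the variable names, the simulated outcome (time on code), and the built-in high-reputation effect are illustrative assumptions, not the study's actual analysis or data.

```python
# Hypothetical sketch: repeated-measures ANOVA for a 3 x 3 within-subjects
# design (reputation x performance), with one observation per subject per cell.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
levels = ["low", "medium", "high"]
rows = []
for subject in range(38):  # N = 38, matching the abstract
    for rep in levels:
        for perf in levels:
            # Simulated seconds spent inspecting the code, with a made-up
            # bump for high-reputation stimuli.
            time_on_code = 60 + 15 * (rep == "high") + rng.normal(0, 5)
            rows.append((subject, rep, perf, time_on_code))

df = pd.DataFrame(rows, columns=["subject", "reputation", "performance", "time_on_code"])
result = AnovaRM(df, depvar="time_on_code", subject="subject",
                 within=["reputation", "performance"]).fit()
print(result)  # F tests for reputation, performance, and their interaction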