The self-concept maintenance theory holds that many people will cheat in order to maximize self-profit, but only to the extent that they can do so while maintaining a positive self-concept. Mazar, Amir, and Ariely (2008, Experiment 1) gave participants an opportunity and incentive to cheat on a problem-solving task. Prior to that task, participants either recalled the Ten Commandments (a moral reminder) or recalled 10 books they had read in high school (a neutral task). Results were consistent with the self-concept maintenance theory. When given the opportunity to cheat, participants given the moral-reminder priming task reported solving 1.45 fewer matrices than did those given a neutral prime (Cohen's d = 0.48); moral reminders reduced cheating. Mazar et al.'s article is among the most cited in deception research, but their Experiment 1 has not been replicated directly. This Registered Replication Report describes the aggregated result of 25 direct replications (total N = 5,786), all of which followed the same preregistered protocol. In the primary meta-analysis (19 replications, total n = 4,674), participants who were given an opportunity
Srull and Wyer (1979) demonstrated that exposing participants to more hostility-related stimuli caused them subsequently to interpret ambiguous behaviors as more hostile. In their Experiment 1, participants descrambled sets of words to form sentences. In one condition, 80% of the descrambled sentences described hostile behaviors, and in another condition, 20% described hostile behaviors. Following the descrambling task, all participants read a vignette about a man named Donald who behaved in an ambiguously hostile manner and then rated him on a set of personality traits. Next, participants rated the hostility of various ambiguously hostile behaviors (all ratings on scales from 0 to 10). Participants who descrambled mostly hostile sentences rated Donald and the ambiguous behaviors as approximately 3 scale points more hostile than did those who descrambled mostly neutral sentences. This Registered Replication Report describes the results of 26 independent replications (N = 7,373 in the total sample; k = 22 labs and N = 5,610 in the
We examined forced-choice memory testing for deception detection from a theoretical perspective. Evidence suggests that examinees form different strategies to defeat this test. We attempted to describe these strategies within the framework of Cognitive Hierarchy Theory, which distinguishes strategies by the degree to which they anticipate an opponent's strategy. Additionally, we explored whether the strategy-selection process is malleable. Truth tellers and liars completed a forced-choice memory test about a mock crime. Half of the sample was additionally subjected to a misdirection that changed the appearance of the test to that of a polygraph examination. We found detection accuracies and strategies similar to those in previous experiments, and our misdirection manipulation elicited new strategies and behaviour. Theoretical and practical applications are discussed.
The present experiment investigated similarities in participants' nonverbal and verbal behaviours when responding to baseline and investigative questions, comparing two different types of baselines. Police literature suggests obtaining a baseline through small talk, whereas academic literature stresses that baseline and investigative themes should be comparable. A baseline was first obtained (either through small talk or through a comparable theme), and then the investigative questioning began. During the investigative questioning, participants either truthfully reported a set of actions they had actually performed or lied about them. Truth tellers and liars in the small-talk condition did not differ in their level of similarity when responding to the baseline and investigative questions. In the comparable-truth condition, verbal similarity between the baseline and investigative questions was higher for truth tellers than for liars, but only for one variable: spatial detail. The results therefore indicate that a small-talk baseline should not be used to assess interviewees' credibility, and that a comparable-truth baseline, although better than a small-talk baseline, is still problematic.
In forced-choice tests (FCTs), examinees are typically presented with questions that have two equally plausible answer alternatives, of which only one is correct. The rationale underlying this test is that guilty examinees tend to avoid relevant crime information, producing a nonrandom response pattern. The validity of FCTs is reduced when examinees are informed about this rationale: coached guilty examinees no longer avoid the correct information but instead try to produce a random mix of correct and incorrect answers. To detect such intentional randomization, a "runs" test, which examines the number of alternations between correct and incorrect answers, has been suggested, but with limited success. We designed a runs test based on the distinction between patterns that look random and patterns that are random. Specifically, we alternated the horizontal position (left or right on the screen) of the correct answer alternative from trial to trial. As a consequence, guilty examinees had to choose between randomizing over correct and incorrect answers, leading to chance performance, or randomizing over left and right positions, producing a pattern that merely "looks" random. Because innocent examinees are unaware of the correct answers, they can only randomize over horizontal positions. Results showed that the number of correct items selected distinguished guilty from innocent examinees only when they were not informed about the underlying rationale. In contrast, the number of alternations between correct and incorrect answers did distinguish informed guilty examinees from innocent ones. The incremental validity of the alternation criterion and theoretical implications are discussed.
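The two scoring criteria described in this abstract, counting correct answers and counting alternations between correct and incorrect answers across trials, can be sketched as follows. This is a minimal illustration of the general idea only; the function name, the response encoding, and the example sequence are assumptions, not taken from the article.

```python
def score_responses(responses):
    """Score a forced-choice response sequence on the two criteria
    described above.

    responses: list of booleans, True if the correct alternative
    was chosen on that trial.

    Returns (number correct, number of alternations).
    """
    n_correct = sum(responses)
    # An alternation occurs whenever two consecutive trials differ
    # (correct followed by incorrect, or vice versa).
    n_alternations = sum(a != b for a, b in zip(responses, responses[1:]))
    return n_correct, n_alternations

# Illustrative example: a guilty examinee who deliberately alternates
# left/right while the correct answer's screen position also alternates
# ends up with a perfectly regular correct/incorrect pattern, i.e. the
# maximum possible number of alternations, which is itself diagnostic.
correct, alternations = score_responses(
    [True, False, True, False, True, False]
)
# correct == 3 (chance-level), alternations == 5 (maximal regularity)
```

A truly random responder would produce roughly half as many alternations as trials, so an extreme alternation count in either direction departs from what a genuinely random pattern looks like.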