Prosociality is considered a virtue. Those who care for others are admired, whereas those who care only for themselves are despised. For one's reputation, it pays to be nice. Does it pay to be even nicer? Four experiments assess reputational inferences across the entire range of prosocial outcomes in zero-sum interactions, from completely selfish to completely selfless actions. We observed consistent nonlinear evaluations: Participants evaluated selfish actions more negatively than equitable actions, but they did not evaluate selfless actions markedly more favorably than equitable actions. This asymptotic pattern reflected monotonic evaluations for increasingly selfish actions and insensitivity to increasingly selfless actions. It pays to be nice but not to be really nice. Additional experiments suggest that this pattern stems partly from failing to make spontaneous comparisons between varying degrees of selflessness. We suggest that these reputational incentives could guide social norms, encouraging equitable actions but discouraging extremely selfless actions.
Few biases in human judgment are easier to demonstrate than self-righteousness: the tendency to believe one is more moral than others. Existing research, however, has overlooked an important ambiguity in evaluations of one's own and others' moral behavior that could lead to an overly simplistic characterization of self-righteousness. In particular, moral behavior spans a broad spectrum ranging from doing good to doing bad. Self-righteousness could indicate believing that one is more likely to do good than others, less likely to do bad, or both. Based on cognitive and motivational mechanisms, we predicted an asymmetry in the degree of self-righteousness such that it would be larger when considering unethical actions (doing bad) than when considering ethical actions (doing good). A series of experiments confirmed this prediction. A final experiment (Experiment 8) suggests that this asymmetry is partly produced by the difference in perspectives that people adopt when evaluating themselves and others. These results all suggest a bounded sense of self-righteousness. Believing one is "less evil than thou" seems more reliable than believing one is "holier than thou."
Significance

People readily categorize things as good or bad, a welcome adaptation that enables action and reduces information overload. The present research reveals an unforeseen consequence: People do not fully appreciate this immediacy of judgment, instead assuming that they and others will consider more information before forming conclusions than they and others actually do. This discrepancy in perceived versus actual information use reveals a general psychological bias that bears particular relevance in today's information age. Presumably, one hopes that easy access to abundant information fosters uniformly more-informed opinions and perspectives. The present research suggests mere access is not enough: Even after paying costs to acquire and share ever more information, people then stop short and do not incorporate it into their judgments.
Groups of individuals can sometimes make more accurate judgments than the average individual could make alone. We tested whether this group advantage extends to lie detection, an exceptionally challenging judgment with accuracy rates rarely exceeding chance. In four experiments, we find that groups are consistently more accurate than individuals in distinguishing truths from lies, an effect that comes primarily from an increased ability to correctly identify when a person is lying. These experiments demonstrate that the group advantage in lie detection comes through the process of group discussion, and is not a product of aggregating individual opinions (a "wisdom-of-crowds" effect) or of altering response biases (such as reducing the "truth bias"). Interventions to improve lie detection typically focus on improving individual judgment, a costly and generally ineffective endeavor. Our findings suggest a cheap and simple synergistic approach of enabling group discussion before rendering a judgment.

Keywords: lie detection | group decision-making | social cognition | wisdom of crowds | mind reading

Detecting deception is difficult. Accuracy rates in experiments are only slightly greater than chance, even among trained professionals (1-4). This meager accuracy rate appears driven by a modest ability to detect truths rather than lies. In one meta-analysis, individuals accurately identified 61% of truths, but only 47% of lies (5). These results have led researchers to develop costly training programs targeting individual lie detectors to increase accuracy (6-10). We test a different strategy: asking individuals to detect lies as a group.

There are three reasons that groups might detect deception better than individuals. First, because individuals have some skill in distinguishing truths from lies, statistically aggregating individual judgments could increase accuracy (a "wisdom-of-crowds" effect) (11, 12).
If individuals detect truths better than lies, aggregating individual judgments would increase truth detection more than lie detection.

Second, individuals show a reliable "truth bias," assuming others are truthful unless given cause for suspicion (5, 13). If groups are less trusting than individuals (14, 15), then they could detect lies more accurately because they guess someone is lying more often.

Finally, group deliberation could increase accuracy by providing useful information that individuals otherwise lack (16-18). This predicts that group discussion alters how individuals evaluate a given statement to increase accuracy. Because individuals already possess some accuracy in detecting truths, unique improvement from group discussion would increase accuracy in detecting lies.

We know of only two inconclusive experiments that test a group advantage in lie detection. In one experiment, participants first made an individual judgment before group discussion, making the independent influence of the subsequent group discussion unclear (17). Although groups were no more accurate than individuals overall, they were marginally better (...
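The statistical logic behind the first mechanism can be illustrated with a short simulation (a sketch of the general Condorcet-style argument, not code from the paper): majority-vote aggregation amplifies individual accuracy only when individuals are better than chance. Plugging in the meta-analytic rates cited above (61% for truths, 47% for lies) shows why mere aggregation would be expected to help truth detection but not lie detection; the group size of five and the function names are illustrative assumptions.

```python
import random

def majority_vote_accuracy(p_correct, group_size, trials=20000, seed=0):
    """Estimate the accuracy of a simple majority vote among independent
    judges, each individually correct with probability p_correct."""
    rng = random.Random(seed)  # seeded for reproducibility
    correct = 0
    for _ in range(trials):
        # Count how many of the group's independent judgments are correct.
        votes = sum(rng.random() < p_correct for _ in range(group_size))
        if votes > group_size / 2:  # strict majority is correct
            correct += 1
    return correct / trials

# Hypothetical group of 5 independent judges, using the cited rates:
truth_acc = majority_vote_accuracy(0.61, group_size=5)  # above chance: voting helps
lie_acc = majority_vote_accuracy(0.47, group_size=5)    # below chance: voting hurts
print(f"truths: {truth_acc:.2f}, lies: {lie_acc:.2f}")
```

Because 61% exceeds chance, the simulated majority vote pushes truth detection toward roughly 70%, while the below-chance 47% rate on lies is pushed further down. This is why the abstract's finding matters: the observed group advantage in detecting lies cannot come from this kind of aggregation and must instead arise from discussion itself.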