Prosociality is considered a virtue. Those who care for others are admired, whereas those who care only for themselves are despised. For one's reputation, it pays to be nice. Does it pay to be even nicer? Four experiments assess reputational inferences across the entire range of prosocial outcomes in zero-sum interactions, from completely selfish to completely selfless actions. We observed consistent nonlinear evaluations: Participants evaluated selfish actions more negatively than equitable actions, but they did not evaluate selfless actions markedly more favorably than equitable actions. This asymptotic pattern reflected monotonic evaluations for increasingly selfish actions and insensitivity to increasingly selfless actions. It pays to be nice but not to be really nice. Additional experiments suggest that this pattern stems partly from failing to make spontaneous comparisons between varying degrees of selflessness. We suggest that these reputational incentives could guide social norms, encouraging equitable actions but discouraging extremely selfless actions.
Few biases in human judgment are easier to demonstrate than self-righteousness: the tendency to believe one is more moral than others. Existing research, however, has overlooked an important ambiguity in evaluations of one's own and others' moral behavior that could lead to an overly simplistic characterization of self-righteousness. In particular, moral behavior spans a broad spectrum ranging from doing good to doing bad. Self-righteousness could indicate believing that one is more likely to do good than others, less likely to do bad, or both. Based on cognitive and motivational mechanisms, we predicted an asymmetry in the degree of self-righteousness such that it would be larger when considering unethical actions (doing bad) than when considering ethical actions (doing good). A series of experiments confirmed this prediction. A final experiment suggests that this asymmetry is partly produced by the difference in perspectives that people adopt when evaluating themselves and others (Experiment 8). These results all suggest a bounded sense of self-righteousness. Believing one is "less evil than thou" seems more reliable than believing one is "holier than thou."
Groups of individuals can sometimes make more accurate judgments than the average individual could make alone. We tested whether this group advantage extends to lie detection, an exceptionally challenging judgment with accuracy rates rarely exceeding chance. In four experiments, we find that groups are consistently more accurate than individuals in distinguishing truths from lies, an effect that comes primarily from an increased ability to correctly identify when a person is lying. These experiments demonstrate that the group advantage in lie detection comes through the process of group discussion, and is not a product of aggregating individual opinions (a "wisdom-of-crowds" effect) or of altering response biases (such as reducing the "truth bias"). Interventions to improve lie detection typically focus on improving individual judgment, a costly and generally ineffective endeavor. Our findings suggest a cheap and simple synergistic approach of enabling group discussion before rendering a judgment.

lie detection | group decision-making | social cognition | wisdom of crowds | mind reading

Detecting deception is difficult. Accuracy rates in experiments are only slightly greater than chance, even among trained professionals (1-4). This meager accuracy rate appears driven by a modest ability to detect truths rather than lies. In one meta-analysis, individuals accurately identified 61% of truths but only 47% of lies (5). These results have led researchers to develop costly training programs targeting individual lie detectors to increase accuracy (6-10). We test a different strategy: asking individuals to detect lies as a group.

There are three reasons that groups might detect deception better than individuals. First, because individuals have some skill in distinguishing truths from lies, statistically aggregating individual judgments could increase accuracy (a "wisdom-of-crowds" effect) (11, 12). If individuals detect truths better than lies, aggregating individual judgments would increase truth detection more than lie detection. Second, individuals show a reliable "truth bias," assuming others are truthful unless given cause for suspicion (5, 13). If groups are less trusting than individuals (14, 15), then they could detect lies more accurately because they guess that someone is lying more often. Finally, group deliberation could increase accuracy by providing useful information that individuals otherwise lack (16-18). This predicts that group discussion alters how individuals evaluate a given statement to increase accuracy. Because individuals already possess some accuracy in detecting truths, unique improvement from group discussion would increase accuracy in detecting lies.

We know of only two inconclusive experiments that test a group advantage in lie detection. In one experiment, participants first made an individual judgment before group discussion, making the independent influence of the subsequent group discussion unclear (17). Although groups were no more accurate than individuals overall, they were marginally better (...
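To make the "wisdom-of-crowds" baseline in the excerpt above concrete, here is a minimal numerical sketch; it is our illustration, not the paper's analysis. It assumes each judge is independently correct at the meta-analytic rates quoted in the excerpt (61% for truths, 47% for lies), and the three-person majority vote is an arbitrary illustrative choice:

```python
# Minimal sketch (assumption-laden illustration, not the paper's analysis):
# why vote aggregation alone cannot explain a group advantage in lie detection.
# Assumed inputs: the meta-analytic accuracy rates quoted in the excerpt
# (61% for truths, 47% for lies); a group size of 3 is an arbitrary choice.
from math import comb

def majority_accuracy(p: float, n: int) -> float:
    """P(majority of n independent judges is correct) when each judge
    is correct with probability p. Requires odd n so ties cannot occur."""
    assert n % 2 == 1, "use an odd group size to avoid ties"
    k_min = n // 2 + 1  # smallest number of correct votes that wins the majority
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(k_min, n + 1))

for label, p in [("truths", 0.61), ("lies", 0.47)]:
    print(f"{label}: individual = {p:.2f}, 3-person majority = {majority_accuracy(p, 3):.2f}")

# Prints:
#   truths: individual = 0.61, 3-person majority = 0.66
#   lies: individual = 0.47, 3-person majority = 0.46
```

Under these assumptions, majority voting amplifies above-chance accuracy (truths) but slightly degrades below-chance accuracy (lies). Pure aggregation thus predicts better truth detection and, if anything, worse lie detection, which is consistent with the paper's argument that a genuine group advantage in detecting lies must come from discussion rather than vote-counting.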
Significance: People readily categorize things as good or bad, a welcome adaptation that enables action and reduces information overload. The present research reveals an unforeseen consequence: People do not fully appreciate this immediacy of judgment, instead assuming that they and others will consider more information before forming conclusions than they and others actually do. This discrepancy in perceived versus actual information use reveals a general psychological bias that bears particular relevance in today's information age. Presumably, one hopes that easy access to abundant information fosters uniformly more-informed opinions and perspectives. The present research suggests mere access is not enough: Even after paying costs to acquire and share ever-more information, people then stop short and do not incorporate it into their judgments.
Moral and immoral behaviors often come in small doses. A person might donate just a few dollars to charity or cheat on just one exam question. Such small actions create ambiguity about whether they reflect a lasting change in an actor's moral character or merely a passing trend. At what sum of good or bad behaviors do observers believe that others have transformed for better or worse, the point at which their actions begin to reflect "them"? Five experiments reveal that this moral tipping point is asymmetric. People require more evidence to perceive improvement than decline; it is apparently easier to become a sinner than a saint, despite equivalent evidence of change. This asymmetry emerges more strongly when targets commit new actions (e.g., begin treating others well or poorly) than when targets cease existing actions (e.g., stop treating others well or poorly). This asymmetry in moral judgment fosters inequitable thresholds for reward and punishment.
Observing other people improve their lives can be a powerful source of inspiration. Eight experiments explore the power of personal change to inspire, along with its limits and underlying reasons. We find that people who have improved from undesirable pasts (e.g., people who used to abuse extreme drugs but no longer do) are more inspiring than people who maintain consistently desirable standings (e.g., people who have never used extreme drugs to begin with), because change is perceived as more effortful than stability (Experiments 1a and 1b). The inspirational power of personal change is rooted in people's lack of access to the internal struggles and hard work that many others may endure to remain "always-good." Accordingly, giving observers access to the effort underlying others' success in maintaining consistently positive standings restores the inspiring power of being "always-good" (Experiments 2-4). Finally, change is more inspiring than stability across many domains but one: people who used to harm others but have since reformed (e.g., ex-bullies or ex-cheaters) do not inspire, and in many cases are even less inspiring than people who have never harmed others to begin with (Experiments 5-7). Together, these studies reveal how, why, and when one's past influences one's present in the eyes of others: having some "bad" in your past can be surprisingly positive, at least partly because observers assume that becoming "good" is harder than being "good" all along.
Change often emerges from a series of small doses. For example, a person may conclude that a happy relationship has eroded not from one obvious fight but from smaller unhappy signs that at some point "add up." Everyday fluctuations therefore create ambiguity about whether they reflect substantive shifts or mere noise. Ten studies reveal an asymmetry in the point at which people conclude that change is "official": people demand less evidence to diagnose lasting decline than lasting improvement, despite similar evidential quality. This effect was pervasive, replicating across many domains and parameters. For example, a handful of poor grades, bad games, and gained pounds led participants to diagnose intellect, athleticism, and health as "officially" changed, yet corresponding positive signs were dismissed as fickle flukes (Studies 1a, 1b, and 1c). The asymmetry also manifested in real-time reactions: participants interpreted the same graphs of change in the economy and public health as more meaningful when framed as depicting decline rather than improvement (Study 2), and were more likely to gamble actual money on continued bad versus good luck (Study 3). Why? Effects held across self/other change, added/subtracted change, and intended/unintended change (Studies 4a, 4b, and 4c), suggesting a generalized negativity bias. Teasing this apart, we highlight a novel "entropy" component beyond standard accounts like risk aversion: good things seem more truly capable of losing their positive qualities than bad things seem capable of gaining them, making signs of decline appear more immediately diagnostic (Studies 5 and 6). An asymmetric tipping point carries theoretical and practical implications for how people may react inequitably to small signs of change.