We investigate why people keep their promises in the absence of external enforcement mechanisms and reputational effects. In a controlled laboratory experiment we show that exogenous variation of second-order expectations (promisors' expectations about promisees' expectations that the promise will be kept) leads to a significant change in promisor behavior. We provide clean evidence that a promisor's aversion to disappointing a promisee's expectation leads her to keep her promise. We propose a simple theory of lexicographic promise keeping that is supported by our results and nests the findings of previous contributions as special cases.
Keywords: promises, expectations, beliefs, contracts. JEL Classification: A13, C91, D03, C72, D64, K12.
We thank Mark Greenberg, whose insightful comments influenced the design of our experiment. We are also grateful to
We investigate why people keep their promises in the absence of external enforcement mechanisms and reputational effects. In a controlled laboratory experiment we show that exogenous variation of second-order expectations (promisors' expectations about promisees' expectations) leads to a significant change in promisor behavior. We provide evidence that a promisor's aversion to disappointing a promisee's expectation leads her to behave more generously. We propose and estimate a simple model of conditional guilt aversion that is supported by our results and nests the findings of previous contributions as special cases.
An increasing number of automated and artificial intelligence (AI) systems make medical treatment recommendations, including personalized recommendations, which can deviate from standard care. Legal scholars argue that following such nonstandard treatment recommendations will increase liability in medical malpractice, undermining the use of potentially beneficial medical AI. However, such liability depends in part on lay judgments by jurors: when physicians use AI systems, in which circumstances would jurors hold physicians liable? Methods: To determine potential jurors' judgments of liability, we conducted an online experimental study of a nationally representative sample of 2,000 U.S. adults. Each participant read 1 of 4 scenarios in which an AI system provides a treatment recommendation to a physician. The scenarios varied the AI recommendation (standard or nonstandard care) and the physician's decision (to accept or reject that recommendation). In each scenario, the physician's decision subsequently caused harm. Participants then assessed the physician's liability. Results: Our results indicate that physicians who receive advice from an AI system to provide standard care can reduce the risk of liability by accepting, rather than rejecting, that advice, all else being equal. However, when an AI system recommends nonstandard care, there is no similar shielding effect from rejecting that advice and thus providing standard care. Conclusion: The tort law system is unlikely to undermine the use of AI precision medicine tools and may even encourage the use of these tools.
Promising serves as an important commitment mechanism by operating on a potential cheater's internal value system. We present experimental evidence on what motivates people to keep their promises. First, they feel that they are duty-bound to keep their promises regardless of whether promisees expect them to (promising per se effect). Second, they care about not disappointing promisees' expectations, regardless of whether those expectations were induced by the promise (expectations per se effect). Third, they are even more motivated to avoid disappointing promisees' expectations when those expectations were induced by a promise (interaction effect). Clear evidence of some of these effects has eluded the prior literature due to limitations inherent to the experimental methods employed. We sidestep those difficulties by using a novel between-subject vignette design. Our results also shed light on how promising may contribute to the self-reinforcing creation of trust.