Trust-based interactions with robots are increasingly common in the marketplace, workplace, on the road, and in the home. However, a looming concern is that people may not trust robots as they do humans. While trust in fellow humans has been studied extensively, little is known about how people extend trust to robots. Here we compare trust-based investments and emotions across three nearly identical economic games: human-human trust games, human-robot trust games, and human-robot trust games in which the robot's decision also affects another human. Robots in our experiment mimic humans: they are programmed to make reciprocity decisions based on previously observed behaviors of humans in analogous situations. We find that people invest similarly in humans and robots. By contrast, the social emotions elicited by the interactions (but not the non-social emotions) differed across human and robot trust games, and did so lawfully: emotional reactions depended on how one's trust game decision interacted with the partnered agent's decision, and on whether another person was affected economically and emotionally.
Given the high costs of conflict in both theory and practice, we examine and experimentally test the conditions under which conflict between asymmetric agents can be resolved. We model conflict as a two-agent rent-seeking contest for an indivisible prize. Before conflict arises, both agents may agree to allocate the prize by a fair coin flip to avoid the costs of conflict. The model predicts that "parity promotes peace": in the pure-strategy equilibrium, agents with relatively symmetric conflict capabilities agree to resolve the conflict by using the random device; with sufficiently asymmetric capabilities, however, conflict is unavoidable because the stronger agent prefers to fight. The results of the experiment confirm that the availability of the random device partially eliminates conflict when agents are relatively symmetric; however, the device also reduces conflict between substantially asymmetric agents.
JEL Classifications: C72, C91, D72, D74
We examine subjects' behavior in sender-receiver games in which there are gains from trade and an alignment of interests in one of the two states. We elicit subjects' beliefs, risk preferences, and other-regarding preferences. Our design also allows us to examine the behavior of subjects in both roles, to determine whether behavior in one role is a best response to the subject's own behavior in the other role. The results of the experiment indicate that, when acting as senders, the majority of subjects adopt deceptive strategies, sending a favorable message when the true state of nature is unfavorable. When acting as receivers, the majority of subjects invest conditional upon receiving a favorable message. The investing behavior of receivers cannot be explained by risk preferences or as a best response to the subject's own behavior in the sender's role. However, it can be rationalized by accounting for elicited beliefs and other-regarding preferences. Finally, the honest behavior of some senders can be explained by other-regarding preferences. Thus, we find that liars do believe, and that individuals who care about the payoffs of others tend to be honest.
JEL Classifications: C72, C91, D82, D83
Though individuals differ in the degree to which they are predisposed to trust or to act trustworthy, we theorize that trust-based behaviors are universally determined by the calibration of conflicting short-sighted and long-sighted behavior-regulation programs, and that these programs are calibrated by emotions experienced personally and interpersonally. In this chapter we review both the mainstream and evolutionary theories of emotions on which philosophers, psychologists, and behavioral economists have based their work, and which can inform our understanding of trust-based behavior regulation. The standard paradigm for understanding emotions is based on mapping their positive and negative affective valence. While valence models often assume that the experience of positive and negative affect is interdependent (leading to the popular use of bipolar affect scales), a multivariate "recalibrational" model based on positive, negative, interpersonal, intrapersonal, short-sighted, and long-sighted dimensions predicts and recognizes more complex mixed-valence emotional states. We summarize experimental evidence that supports a model of emotionally calibrated trust regulation and discuss implications for the use of various emotion measures. Finally, in light of these discussions, we suggest future directions for the investigation of emotions and trust psychology.