2015
DOI: 10.1007/s10458-015-9297-1
Autonomous agents and human cultures in the trust–revenge game

Abstract: Autonomous agents developed by experts are embedded with the capability to interact well with people from different cultures. When designing expert agents intended to interact with autonomous agents developed by Non-Game-Theory Experts (NGTE), it is beneficial to obtain insights into the behavior of these NGTE agents. Is the behavior of these NGTE agents similar to human behavior from different cultures? This is an important question, as such a quality would allow an expert agent interacting with NGTE agents to mo…


Cited by 10 publications (5 citation statements)
References 34 publications (42 reference statements)
“…On the one hand, in negotiation - which can be interpreted as a more complex version of the ultimatum game [4,53] - we showed that participants were also likely to be more demanding with unfair counterparts when interacting via agents than when engaging with them directly. On the other hand, in the trust–revenge game, Azaria, Richardson, and Rosenfeld [83] report no difference in trusting and a slight increase in revenge behavior when programming agents, when compared to direct interaction with others. Though the revenge portion of this task may seem similar to an ultimatum game, it is important to note that here it reflects a breach in trust, whereas in the ultimatum game there is no initial allocation of trust.…”
Section: Theoretical Implications (mentioning)
confidence: 96%
“…Conducting human experiments is crucial for testing predictions from theoretical models and gaining insights into various aspects, including psychological effects, emotions, and cultural differences [20, 26]. Mechanisms such as communication sentiment, reward, and punishment in human-human interactions have provided clear evidence of prosocial behavior [49, 50, 51]. Empirical experiments not only test theoretical possibilities but also reveal what actually occurs.…”
Section: Discussion (mentioning)
confidence: 99%
“…[18] For instance, studying human-AI interaction within the general-sum environment and the trust–revenge game provides a comprehensive understanding of these domains [19, 20]. As social interactions have become more hybrid [16, 21], involving humans and AAs, there lies an opportunity to gain new insights into how human cooperation is affected [18, 22]. This work aims to examine the influence of AAs on human cooperative behavior when social dilemmas exist.…”
Section: Introduction (mentioning)
confidence: 99%
“…The rational behavior in the trust game is for the trustee not to return any money to the investor, and thus, for the investor not to pass any money to the trustee [5,8]. However, in practice, human investors invest around half their money, and the trustees return more than they have received [21].…”
Section: Appendix (mentioning)
confidence: 99%
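The backward-induction argument quoted above can be sketched numerically. This is a minimal illustration only: the endowment of 10 and the multiplier of 3 are assumed parameters for the sketch, not values taken from the cited studies.

```python
# Minimal sketch of backward induction in the trust game.
# ENDOWMENT and MULTIPLIER are illustrative assumptions, not values
# from the cited works.
ENDOWMENT = 10   # investor's initial money
MULTIPLIER = 3   # amount sent is multiplied before reaching the trustee

def trustee_best_return(received: int) -> int:
    # A payoff-maximizing trustee's payoff for returning r is
    # (received - r), which is maximized at r = 0.
    return max(range(received + 1), key=lambda r: received - r)

def investor_best_send() -> int:
    # Anticipating the trustee's best response (return nothing),
    # the investor's payoff for sending s is (ENDOWMENT - s + returned).
    def payoff(s: int) -> int:
        returned = trustee_best_return(MULTIPLIER * s)
        return ENDOWMENT - s + returned
    return max(range(ENDOWMENT + 1), key=payoff)

print(investor_best_send())                 # rational send: 0
print(trustee_best_return(MULTIPLIER * 5))  # rational return: 0
```

The sketch reproduces the quoted prediction: the trustee returns nothing, so the investor sends nothing. The empirical finding cited above (investors sending roughly half, trustees returning more than they received) is precisely the deviation from this equilibrium.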