2022
DOI: 10.1038/s41598-022-11518-9

Delegation to artificial agents fosters prosocial behaviors in the collective risk dilemma

Abstract: Home assistant chat-bots, self-driving cars, drones or automated negotiation systems are some of the several examples of autonomous (artificial) agents that have pervaded our society. These agents enable the automation of multiple tasks, saving time and (human) effort. However, their presence in social settings raises the need for a better understanding of their effect on social interactions and how they may be used to enhance cooperation towards the public good, instead of hindering it. To this end, we presen…
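
The game studied in the paper, the collective risk dilemma, is a threshold public goods game played under risk. As a rough orientation, here is a minimal Python sketch of its payoff structure; the endowment, threshold, and risk values are illustrative assumptions, not the authors' experimental parameters.

```python
# Minimal sketch of one collective risk dilemma round. Parameter values are
# illustrative assumptions, not the authors' experimental settings.
import random

ENDOWMENT = 40    # tokens per player (assumed)
THRESHOLD = 120   # group total needed to avert the collective risk (assumed)
RISK = 0.9        # probability of collective loss when the threshold is missed (assumed)

def crd_payoffs(contributions):
    """Players keep what they did not contribute; if the group misses the
    threshold, everyone loses the remainder with probability RISK."""
    success = sum(contributions) >= THRESHOLD
    disaster = (not success) and (random.random() < RISK)
    return [0 if disaster else ENDOWMENT - c for c in contributions]

# Six players: three contribute a fair share (120 / 6 = 20), three free-ride.
# The group total (60) misses the threshold, so with probability 0.9 every
# payoff is wiped out.
print(crd_payoffs([20, 20, 20, 0, 0, 0]))
```

The tension driving the dilemma is visible in the example: free-riding maximizes a player's payoff if the others still reach the threshold, but if too many players reason that way, everyone risks losing everything.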

Cited by 13 publications (31 citation statements)
References 59 publications
“…Furthermore, we build on the long-standing literature on the threshold public goods game (see Croson and Marks (2000) for a meta-analysis), which is a typical tool to frame fundraising (Andreoni, 1998; Rondeau and List, 2008; Cason and Zubrickas, 2019; Marini et al., 2020), as well as on a related body of research that studies other real-world settings characterized by multiple public goods (Blackwell and McKee, 2003; Bernasconi et al., 2009; Buchan et al., 2011; Catola et al., 2020). Last but not least, our findings also contribute to the literature on (i) equilibrium selection in games, with reference to the conflict between payoff and risk dominance (Harsanyi and Selten, 1988; Schmidt et al., 2003; Broseta et al., 2003; Février and Linnemer, 2006; Gold and Colman, 2020), (ii) delegation, as a mechanism to prevent coordination failure (Hamman et al., 2011; Kocher et al., 2018; Butera and Houser, 2018; Fernández Domingos et al., 2022), and (iii) donors' overhead aversion, which usually emerges when a portion of the donations is intended to cover administrative and fundraising costs (Bowman, 2006; Gneezy et al., 2014; Meer, 2014; Portillo and Stinn, 2018).…”
Section: Introduction (supporting)
confidence: 53%
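
The statement above contrasts payoff dominance and risk dominance as competing equilibrium-selection criteria (Harsanyi and Selten, 1988). A tiny stag-hunt example makes the conflict concrete; the payoff numbers below are assumptions chosen so the two criteria disagree, not values taken from any of the cited papers.

```python
# Payoff vs. risk dominance in a symmetric 2x2 stag hunt (assumed payoffs).

# Row player's payoff u[my_action][other_action]; actions: 0 = Stag, 1 = Hare.
u = [[4, 0],
     [3, 3]]

# Both (Stag, Stag) and (Hare, Hare) are pure Nash equilibria.
equilibria = [(0, 0), (1, 1)]
payoff_dominant = max(equilibria, key=lambda eq: u[eq[0]][eq[1]])

# Harsanyi-Selten risk dominance for a symmetric 2x2 game: (Stag, Stag)
# risk-dominates (Hare, Hare) iff (u_SS - u_HS)^2 >= (u_HH - u_SH)^2.
stag_product = (u[0][0] - u[1][0]) ** 2  # deviation-loss product at (Stag, Stag): 1
hare_product = (u[1][1] - u[0][1]) ** 2  # deviation-loss product at (Hare, Hare): 9
risk_dominant = (0, 0) if stag_product >= hare_product else (1, 1)

print("payoff-dominant equilibrium:", payoff_dominant)  # (0, 0): both hunt stag
print("risk-dominant equilibrium:", risk_dominant)      # (1, 1): both play safe
```

Here everyone prefers the stag equilibrium, but hunting hare is the safer bet against an uncertain partner; that is exactly the coordination failure that delegation mechanisms aim to prevent.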
“…We also measured people's reactive attitudes through vignette-based self-reported measures. Another possibility would be to conduct studies where participants interact with real AI systems and demonstrate their reactive attitudes through behavioral measures, e.g., whether or not they would use or cooperate with an AI system after failures (e.g., [30, 33, 52]). Finally, we studied how people react to AI systems taking on the role of decision-makers.…”
Section: Discussion (mentioning)
confidence: 99%
“…Currently, there is a lot of interest in the interaction between humans, but the decision processes in humans may be quite different from those we usually use in our models. A new frontier where the decision processes are actually clearer is machine learning and artificial intelligence [79]. So far, these algorithms are usually trained by fixed training sets which are obtained in a situation where there are no other artificial intelligences.…”
Section: The Future (mentioning)
confidence: 99%
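
The last statement points out that learning agents are usually trained on fixed data collected without other adapting agents present. A minimal sketch of why that assumption breaks down in multi-agent settings: two independent, stateless Q-learners (a bandit-style simplification) in a repeated prisoner's dilemma each face a non-stationary environment, because the opponent's policy keeps shifting as it learns. The game, payoffs, and learning parameters are illustrative assumptions, not from the cited work.

```python
# Two independent, stateless Q-learners in a repeated prisoner's dilemma.
# Payoffs and learning parameters are illustrative assumptions.
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
ACTIONS = ["C", "D"]
ALPHA, EPS = 0.1, 0.1  # learning rate and exploration rate (assumed)

q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]  # one Q-table per agent

def choose(table):
    # Epsilon-greedy: explore occasionally, otherwise exploit.
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(table, key=table.get)

for _ in range(10_000):
    a0, a1 = choose(q[0]), choose(q[1])
    r0, r1 = PAYOFF[(a0, a1)]
    # Each update chases a moving target: the "environment" an agent faces
    # includes the other agent, whose policy is shifting as it learns.
    q[0][a0] += ALPHA * (r0 - q[0][a0])
    q[1][a1] += ALPHA * (r1 - q[1][a1])

print(q[0], q[1])  # defection (D) typically ends up with the higher Q-value
```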