2017
DOI: 10.1007/s10458-017-9376-6
Social decisions and fairness change when people’s interests are represented by autonomous agents

Abstract: In the realms of AI and science fiction, agents are fully-autonomous systems that can be perceived as acting of their own volition to achieve their own goals. But in the real world, the term "agent" more commonly refers to a person that serves as a representative for a human client and works to achieve this client's goals (e.g., lawyers and real estate agents). Yet, until the day that computers become fully autonomous, agents in the first sense are really agents in the second sense as well: computer agents tha…

Cited by 27 publications (34 citation statements); references 77 publications.
“…For each combination of M = {1, N − 1} with N = {5, 10} we collected 100 pairs of strategies (p, q). We followed the strategy method (de Melo, Marsella, and Gratch 2018), informing people about the game rules, group size and group decision rule, and asking participants to submit their proposal as a Proposer (p) and the minimum acceptable offer as a Responder (q). We provide an eliminatory test, used to guarantee that individuals actually understood the rules of the MUG.…”
Section: Behavioral Experiments (mentioning)
confidence: 99%
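The group decision rule of the multiplayer ultimatum game (MUG) described in the excerpt can be sketched as follows. This is an illustrative reading, not the cited authors' code: a proposal p passes when at least M of the responders find it acceptable, i.e., their minimum acceptable offer q does not exceed p. The function name and parameter layout are assumptions for clarity.

```python
def mug_accepts(p, qs, M):
    """Group decision rule of the multiplayer ultimatum game (MUG).

    p  -- the Proposer's offer (e.g., a fraction of the pot, 0..1)
    qs -- one minimum acceptable offer q per Responder (strategy method)
    M  -- minimum number of individual acceptances for the group
          to accept the proposal (e.g., M = 1 or M = N - 1)
    """
    # A responder accepts whenever the offer meets their threshold.
    individual_acceptances = sum(1 for q in qs if p >= q)
    return individual_acceptances >= M

# Illustrative values: N = 5 players means 4 responders.
# Under the strict rule M = N - 1 = 4, a single holdout rejects the group:
mug_accepts(0.3, [0.2, 0.25, 0.1, 0.4], 4)  # False: only 3 of 4 accept
# Under the lenient rule M = 1, one acceptance suffices:
mug_accepts(0.3, [0.2, 0.25, 0.1, 0.4], 1)  # True
```

This makes concrete why the excerpt varies M between 1 and N − 1: the two extremes bracket how demanding the group decision rule is.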
“…These machines are changing the traditional ways we engage with others, and it is important to understand those changes. Recent research confirms that interacting through machines can affect the decisions we make with others (5). These findings showed that in ultimatum, impunity, and negotiation games, people were less likely to accept unfair offers if asked to program a machine to act on their behalf compared with direct interaction with proposers.…”
(mentioning)
confidence: 69%
“…However, to increase experimental control, participants always engaged with a computer script that simulated the other participants. Similar experimental manipulations have been used in other experiments studying behavior involving intelligent machines (5,38,39). Participants were fully debriefed about this experimental procedure at the end of the experiment.…”
Section: Overview of Experiments (mentioning)
confidence: 99%
“…There is some evidence that individuals instruct their representatives to perform more fairly than they themselves would, due to reputation (Ramchurn et al., 2003) or temporal effects (Pronin et al., 2008), that is, a desire to be perceived by others as good or fair; but these effects persist even into anonymized scenarios (de Melo et al., 2016). Some theories maintain that, when considering what instructions to provide to one's representative, principals engage in higher-level thinking about fairness, equity, and other broad social goals than they might otherwise do in the heat of the moment (de Melo et al., 2018; Giacomantonio et al., 2010). Indeed, this may be the goal of some people who value indirect interactions: allowing "cooler heads to prevail" can often have direct benefits for all parties.…”
Section: Human-Agent Negotiation and Representation (mentioning)
confidence: 99%
“…In a recent provocative paper in the Academy of Sciences, de Melo and colleagues examined the norm of fairness and showed that people treated each other more fairly when they interacted through AI systems (de Melo et al, 2018). Using several domains, including negotiation, they found that when people "programmed" the behavior of an agent to act on their behalf, the agent treated others more fairly than they would have themselves.…”
Section: Introduction (mentioning)
confidence: 99%