2020
DOI: 10.1609/aaai.v34i09.7077
AI for Explaining Decisions in Multi-Agent Environments

Abstract: Explanation is necessary for humans to understand and accept decisions made by an AI system when the system's goal is known. It is even more important when the AI system makes decisions in multi-agent environments where the human does not know the systems' goals since they may depend on other agents' preferences. In such situations, explanations should aim to increase user satisfaction, taking into account the system's decision, the user's and the other agents' preferences, the environment settings and propert…


Cited by 27 publications (24 citation statements)
References 23 publications
“…Human Centered AI Institute (https://hai.stanford.edu/) or the UKRI Trustworthy Autonomous Systems Hub (http://www.tas.ac.uk/). From a technical perspective, human-centered (Lepri et al., 2021; Shneiderman, 2020; Wilson and Daugherty, 2018) and machine-centered approaches (Rahwan et al., 2019; Awad et al., 2018; Kraus et al., 2020) to the development of artificial intelligence have emerged. Human-centered approaches propose to develop socially beneficial and ethical machine intelligence that also augments human capabilities and undertakes human tasks with high reliability.…”
Citation type: mentioning (confidence: 99%)
“…In our work the explanations do not attempt to explain the output of the system to a passenger but to provide additional information that is likely to increase the user's satisfaction from the system. Therefore, our work can be seen as one of the first instances of x-MASE [29], explainable systems for multi-agent environments.…”
Section: Related Work (citation type: mentioning, confidence: 99%)
“…Such an interaction could be for instance conversational, formalized as dialogue between the user and the system, see e.g. [18,51,57,101,110,121,128,144,146,150,151,186,188,190,204].…”
Section: Actionable Explanations (citation type: mentioning, confidence: 99%)
“…However, more generally and with respect to the user's satisfaction, Kraus et al stipulate in [121] that there has so far been little explainability in multi-agent environments. They claim explainability in MAS is more challenging than in other settings because "in addition to identifying the technical reasons that led to the decision, there is a need to convey the preferences of the agents that were involved."…”
Section: Multi-agent Systems (citation type: mentioning, confidence: 99%)