Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence 2018
DOI: 10.24963/ijcai.2018/671

Hierarchical Expertise Level Modeling for User Specific Contrastive Explanations

Abstract: There is a growing interest within the AI research community in developing autonomous systems capable of explaining their behavior to users. However, the problem of computing explanations for users of different levels of expertise has received little research attention. We propose an approach for addressing this problem by representing the user's understanding of the task as an abstraction of the domain model that the planner uses. We present algorithms for generating minimal explanations in cases where this a…
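The abstract's core idea lends itself to a short illustration. The sketch below is ours, not the paper's algorithm: it assumes domain models can be arranged in a hierarchy from most abstract (the user's presumed understanding) to most concrete (the planner's own model), and that a caller-supplied predicate `resolves` (a hypothetical helper) reports when a model carries enough detail to expose why the user's alternative fails while the proposed plan succeeds.

```python
def explain_at_minimal_level(abstraction_hierarchy, plan, foil, resolves):
    """Return the least-detailed model (and its level) whose extra detail
    resolves the foil, i.e. shows why `plan` works while `foil` does not.

    `abstraction_hierarchy` is ordered from most abstract (the user's
    presumed understanding) to most concrete (the planner's model);
    `resolves` is a caller-supplied predicate (hypothetical helper)."""
    for level, model in enumerate(abstraction_hierarchy):
        if resolves(model, plan, foil):
            return level, model
    return None  # foil cannot be resolved even with the full model


# Toy usage: models as fact sets; assume (for illustration only) the foil
# is resolved once the model mentions fuel consumption.
hierarchy = [
    {"move"},                                  # user's abstract view
    {"move", "fuel-needed"},                   # one refinement step
    {"move", "fuel-needed", "fuel-capacity"},  # planner's full model
]
print(explain_at_minimal_level(
    hierarchy, plan="deliver-via-depot", foil="deliver-direct",
    resolves=lambda m, p, f: "fuel-needed" in m))
```

The smallest-first walk makes the returned level minimal by construction: every more abstract model has already been checked and found insufficient.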

Cited by 40 publications (39 citation statements) | References 1 publication
“…Depending on the choice of explanatory foil, different answers are appropriate [8]. Sreedharan et al. describe an algorithm for generating the minimal explanation that patches a user's partial understanding of a domain [37]. Work on mixed-initiative planning [7] has demonstrated the importance of supporting interactive dialog with a planning system.…”
Section: Explaining Combinatorial Search
confidence: 99%
“…The problem of obtaining contrastive explanations is designed as a Bayesian inference problem, with the posterior distribution to be maximized defined as the probability of a contrastive explanation given a set of positive and negative plan traces. Conversely, Sreedharan et al. consider the task of automatic analysis of counterfactual explanations in their “Hierarchical Expertise-Level Modeling” framework [156]. A robot provides a user with a plan for the next action to take.…”
Section: Explainability Methods
confidence: 99%
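Read literally, the quoted description is a maximum-a-posteriori problem. The formulation below uses our own notation, not the cited survey's: E ranges over candidate contrastive explanations, and T+ / T- denote the sets of positive and negative plan traces.

```latex
% MAP reading of the quoted description (notation ours):
% E ranges over candidate contrastive explanations;
% \mathcal{T}^{+}, \mathcal{T}^{-} are the positive / negative plan traces.
E^{*} \;=\; \operatorname*{arg\,max}_{E} \; P\bigl(E \mid \mathcal{T}^{+}, \mathcal{T}^{-}\bigr)
      \;=\; \operatorname*{arg\,max}_{E} \; P\bigl(\mathcal{T}^{+}, \mathcal{T}^{-} \mid E\bigr)\,P(E)
```

The second equality is Bayes' rule with the normalizer dropped, which is safe because the evidence term does not vary with E.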
“…Counterfactual explanation generation appears highly relevant to sequential tasks in robotics such as automatic planning [94], [157], [161]. Moreover, some of the robotics-related frameworks found in reinforcement learning settings provide explanations for policies that a robot selects at a given time step [132], [156].…”
Section: AI Problem
confidence: 99%
“…Contrastive. The contrastive nature of these explanations comes from how the model update preserves τ of the given plan as opposed to the foil, which may be implicitly [15] or explicitly [72] provided. This is also closely tied with the selection process of those model updates.…”
Section: Model Reconciliation
confidence: 99%
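A contrastive model update of the kind the quote describes can be sketched as a smallest-first search. Everything below (models as fact sets, the caller-supplied `is_valid` test) is our illustrative assumption, not the surveyed papers' actual machinery.

```python
from itertools import combinations

def contrastive_update(planner_model, user_model, plan, foil, is_valid):
    """Smallest set of planner-model facts whose addition to the user's
    model preserves the given plan while ruling out the foil.
    `is_valid(model, plan)` is a caller-supplied check (hypothetical)."""
    missing = sorted(planner_model - user_model)   # facts the user lacks
    for size in range(len(missing) + 1):           # smallest updates first
        for update in map(set, combinations(missing, size)):
            candidate = user_model | update
            if is_valid(candidate, plan) and not is_valid(candidate, foil):
                return update                      # minimal by construction
    return None  # no update separates the plan from the foil
```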
“…In [71], the authors show how to reconcile with a set of possible mental models {Π_i^H} and also demonstrate how the same framework can be used to explain to multiple users in the loop. In [72], on the other hand, the authors estimate the mental model from the provided foil.…”
Section: Model Reconciliation Expansion Pack
confidence: 99%
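The multi-model case described for [71] suggests a conjunctive variant of the same search: one update that works for every candidate mental model at once. Again, this is a sketch under the same set-of-facts assumption, not the cited paper's algorithm.

```python
from itertools import combinations

def reconcile_all(planner_model, mental_models, plan, is_valid):
    """One explanation for several users: the smallest set of planner-model
    facts whose addition makes `plan` valid in every candidate mental
    model (the set {Pi_i^H} from [71]). `is_valid` is caller-supplied."""
    # Pool together every fact that some mental model is missing.
    pool = sorted(set().union(*(planner_model - m for m in mental_models)))
    for size in range(len(pool) + 1):              # smallest updates first
        for update in map(set, combinations(pool, size)):
            if all(is_valid(m | update, plan) for m in mental_models):
                return update
    return None  # no single update satisfies every mental model
```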