2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
DOI: 10.1109/hri.2019.8673193
Plan Explanations as Model Reconciliation -- An Empirical Study

Abstract: Recent work in explanation generation for decision-making agents has looked at how unexplained behavior of autonomous systems can be understood in terms of differences between the model of the system and the human's understanding of that model, and how the explanation process arising from this mismatch can then be seen as a process of reconciliation of these models. Existing algorithms in such settings, while having been built on contrastive, selective and social properties of explanations as studied extensively in…

Cited by 53 publications (48 citation statements)
References 24 publications
“…These communications can be verbal (explicit) or nonverbal (implicit), as seen in Section "Mental Model Methodologies." For explicit models, the following qualities have been found to be positively correlated with trust and teamwork: task-related communications, contrastive explanations expressing model divergence, and user- and context-dependent information (such as providing technical information to an expert, and accessible information to a lay user) [77][78][79]. For implicit models, such as those aimed at plan legibility and explicability, self-reported understanding of a robotic agent's behavior or goal is a common evaluation metric.…”
Section: Evaluation Methods (mentioning)
confidence: 99%
“…Unless otherwise clarified, it carries implicit commitments to what will happen as the robot continues to operate in this context. Chakraborti et al, for example, have couched explanation as "plan reconciliation" in order for human-robot teams to share an understanding through explanation that enables and presupposes future collaboration [Chakraborti et al 2019].…”
Section: Purpose and Prospective Action (mentioning)
confidence: 99%
“…Explanations, from that vantage, can be a means toward improving human-robot performance. Explanation has been couched, for example, as "plan reconciliation," wherein human-robot teams might share an understanding through explanation that enables and presupposes future collaboration [Chakraborti et al 2019]. Hayes et al. see the value of explanation in achieving robot controller transparency [Hayes and Shah 2017].…”
Section: Introduction (mentioning)
confidence: 99%
“…Such explanations are inherently social in being able to explicitly capture the effect of expectations in the explanation process. In user studies conducted in [12], it was shown that participants were indeed able to identify the correct τ based on an explanation. Note that, in the model reconciliation framework, the mental model is just a version of the decision making problem at hand which the agent believes the user is operating under.…”
Section: Model Reconciliation (mentioning)
confidence: 99%