Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence 2017
DOI: 10.24963/ijcai.2017/23

Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy

Abstract: When AI systems interact with humans in the loop, they are often called on to provide explanations for their plans and behavior. Past work on plan explanations primarily involved the AI system explaining the correctness of its plan and the rationale for its decision in terms of its own model. Such soliloquy is wholly inadequate in most realistic scenarios where the humans have domain and task models that differ significantly from that used by the AI system. We posit that the explanations are best studied in li…
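The model-reconciliation idea in the abstract can be sketched concretely: an explanation is a minimal set of model differences that, once communicated to the human, makes the robot's plan optimal in the human's updated model. The brute-force sketch below is purely illustrative; the models, actions, and cost function are invented placeholders and do not reflect the paper's actual formulation.

```python
from itertools import combinations

# Toy models: a model is a frozenset of domain "facts" an agent believes.
# All facts, actions, and costs here are hypothetical.
ROBOT_MODEL = frozenset({"door_locked", "has_key", "heavy_crate"})
HUMAN_MODEL = frozenset({"has_key"})  # the human is missing two facts

def plan_cost(plan, model):
    """Hypothetical cost: one unit per action, plus a large penalty for
    skipping a step that the given model says is necessary."""
    cost = len(plan)
    if "door_locked" in model and "unlock_door" not in plan:
        cost += 10
    if "heavy_crate" in model and "push_crate" not in plan:
        cost += 10
    return cost

def is_optimal(plan, model, alternatives):
    """True if no alternative plan is cheaper under this model."""
    return all(plan_cost(plan, model) <= plan_cost(alt, model)
               for alt in alternatives)

def minimal_explanation(robot_plan, alternatives):
    """Smallest set of model updates that makes the robot's plan
    optimal in the human's updated model."""
    diff = sorted(ROBOT_MODEL - HUMAN_MODEL)
    for k in range(len(diff) + 1):
        for subset in combinations(diff, k):
            updated = HUMAN_MODEL | set(subset)
            if is_optimal(robot_plan, updated, alternatives):
                return set(subset)
    return None

robot_plan = ["unlock_door", "push_crate", "deliver"]
alternatives = [["deliver"],
                ["unlock_door", "deliver"],
                ["push_crate", "deliver"]]
print(minimal_explanation(robot_plan, alternatives))
```

Here the human's model must be updated with both missing facts before the robot's three-step plan beats the shorter alternatives, so the minimal explanation communicates both differences.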

Cited by 155 publications (170 citation statements). References 0 publications.
“…Thus, going forward, the objective function should incorporate the cost or difficulty of analyzing the plans and explanations from the point of view of the human in addition to the current costs of explicability and explanations (as shown in Table 1) modeled from the perspective of the robot model (refer to [29] for more details). Table 1 shows the statistics of the explanations / plans from 124 problem instances that required minimal explanations as per [5], and 25 and 40 instances that contained balanced and explicable plans respectively, as before. As desired, the robot gains in length of explanations but loses out in cost of plans produced as it progresses along the spectrum of optimal to explicable plans.…”
Section: Results
confidence: 99%
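The trade-off this citing paper describes — shorter explanations at the price of costlier plans as the robot moves from optimal toward explicable plans — can be sketched as a weighted objective. The candidate numbers and the weight alpha below are invented for illustration and are not from Table 1 of the cited work.

```python
# Hypothetical points along the optimal-to-explicable spectrum: each
# candidate pairs a plan cost with the length of the explanation needed
# to reconcile it with the human's model. All numbers are invented.
candidates = [
    {"name": "optimal",    "plan_cost": 10, "explanation_len": 5},
    {"name": "balanced",   "plan_cost": 12, "explanation_len": 2},
    {"name": "explicable", "plan_cost": 16, "explanation_len": 0},
]

def combined_cost(c, alpha):
    """alpha weights how difficult explanations are for the human to analyze."""
    return c["plan_cost"] + alpha * c["explanation_len"]

def best_plan(alpha):
    """Pick the candidate minimizing plan cost plus weighted explanation cost."""
    return min(candidates, key=lambda c: combined_cost(c, alpha))

for alpha in (0.1, 2.0, 5.0):
    print(alpha, best_plan(alpha)["name"])
```

With a low alpha the optimal plan wins, a moderate alpha selects the balanced plan, and a high alpha (an audience for whom explanations are expensive to digest) pushes the choice toward the fully explicable plan.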
“…This seems to be a viable approach to further reduce the size of explanations (cf. the selective property of explanations in [22]) in a post-hoc setting, and is outside the scope of the explanations developed in [5].…”
Section: Post-hoc Explanations
confidence: 99%
“…Table I shows the ratios (referred to as the explicability ratio) between the number of explicable actions and the number of actions over all plans, created for the testing problems using our approach, the FF planner, and the human plan, respectively. The interactive explicable plan (our approach) is created using the heuristic search method mentioned in Equation (4). Note that all the human actions will be considered explicable in our plans (although one can argue that is not the case).…”
Section: A. Experimental Setup
confidence: 99%
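The explicability ratio described in this citation is a simple fraction: explicable actions over all actions across a set of plans. A minimal sketch, with made-up plans and a hypothetical predicate standing in for the human's judgment of which actions are expected:

```python
def explicability_ratio(plans, is_explicable):
    """Fraction of actions across all plans that the human observer
    deems explicable. `is_explicable` is a predicate on actions."""
    actions = [a for plan in plans for a in plan]
    return sum(1 for a in actions if is_explicable(a)) / len(actions)

# Illustrative plans; "detour" stands in for an action the human
# would not expect and therefore finds inexplicable.
plans = [
    ["pickup", "move", "handover"],
    ["move", "detour", "handover"],
]
print(explicability_ratio(plans, lambda a: a != "detour"))  # 5 of 6 actions
```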
“…The challenge is how the robot can utilize this information to synthesize a plan while avoiding conflicts or providing proactive assistance [2,5]. There are different approaches to planning with such considerations [1,4]. Another key consideration is to be socially acceptable [8,15], where the robot must be aware of the expectations of its human teammates and act accordingly.…”
Section: Introduction
confidence: 99%