2020
DOI: 10.1609/aaai.v34i06.6534

A New Approach to Plan-Space Explanation: Analyzing Plan-Property Dependencies in Oversubscription Planning

Abstract: In many usage scenarios of AI Planning technology, users will want not just a plan π but an explanation of the space of possible plans, justifying π. In particular, in oversubscription planning where not all goals can be achieved, users may ask why a conjunction A of goals is not achieved by π. We propose to answer this kind of question with the goal conjunctions B excluded by A, i.e., that could not be achieved if A were to be enforced. We formalize this approach in terms of plan-property dependencies, where…

Cited by 29 publications (28 citation statements); References 25 publications
“…Note that all these questions can be handled by QP, QW, QC and QU queries in the context of mMAPF. Recently, Eifler et al (2020) have presented a method to generate "contrastive" explanations to questions of the last two forms: "Why not p?", where p is a propositional formula describing a plan property.…”
Section: Discussion
confidence: 99%
“…The idea is that an AI system should be able to explain, to some extent, its behaviour to stakeholders. Focusing on the narrower topic dubbed XAIP, for EXplainable AI Planning, a number of recent works have considered the problems of defining what "explainable" means for an automated planning system, explaining and describing generated plans, bridging the gap between machines and human stakeholders, and designing approaches to explain the behaviour of planning systems [6,5,3,22,21,2].…”
Section: A Knowledge Engineering (Historical) Perspective
confidence: 99%
“…The explanation then identifies an exemplary plan or policy that satisfies those constraints, thus demonstrating how the computed plan is better. The authors of [20], on the other hand, expect user queries to be expressed in terms of plan properties, which are user-defined binary properties that apply to all valid plans for the problem. The explanation then takes the form of other plan properties that are entailed by those properties.…”
Section: Inference Reconciliation
confidence: 99%
“…Among these works, [41; 19; 37; 20] aim for minimal explanations as a means of selection. Moreover, [37] and [20] could be considered social, as they at least specifically try to frame explanations in human-understandable terms. Q4: "Why is Π not solvable?…”
Section: Inference Reconciliation
confidence: 99%