2013 IEEE 25th International Conference on Tools with Artificial Intelligence
DOI: 10.1109/ictai.2013.23
Fast Strong Planning for FOND Problems with Multi-root Directed Acyclic Graphs

Abstract: We present a planner for addressing a difficult yet under-investigated class of planning problems: Fully Observable Non-Deterministic (FOND) planning problems with strong solutions. Our strong planner employs a new data structure, the MRDAG (multi-root directed acyclic graph), to define how the solution space should be expanded. We further equip an MRDAG with heuristics to ensure planning toward the relevant search direction. We performed extensive experiments to evaluate MRDAG and the heuristics. Results show …

Cited by 1 publication (2 citation statements). References 5 publications.
“…Fu et al. [58,59] propose a FOND planner that produces strong plans based on a multi-root directed acyclic graph (MRDAG) heuristic; the MRDAGs define the expansion of the search space by distinguishing between states with one or more applicable actions. This is important because backtracking is essential in strong planning to avoid cycles: whenever a cycle is encountered, backtracking continues until it reaches a state with more than one applicable action.…”
Section: PC-SHOP
confidence: 99%
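The backtracking behavior described above — a strong plan may never revisit a state, so on detecting a cycle the search unwinds until a state with an untried alternative action remains — can be sketched as a small recursive search. This is a hypothetical illustration, not the authors' MRDAG implementation: the function name `strong_plan` and the `actions(state)` interface (yielding `(action, set_of_successor_states)` pairs) are assumptions for the example.

```python
# Hypothetical sketch of cycle-avoiding backtracking in strong planning.
# Not the MRDAG planner of Fu et al.; names and interfaces are illustrative.

def strong_plan(state, goal, actions, path=frozenset()):
    """Return a policy {state: action} that reaches `goal` from `state`
    under every nondeterministic outcome, or None if no strong plan exists.

    `actions(state)` yields (action, set_of_successor_states) pairs.
    Revisiting a state on the current path is a cycle, so the search
    backtracks; it keeps unwinding through single-action states until a
    state with another applicable action can be tried.
    """
    if state == goal:
        return {}
    if state in path:            # cycle: a strong plan must not loop
        return None              # force backtracking
    for action, outcomes in actions(state):
        policy = {state: action}
        for succ in outcomes:    # every outcome must reach the goal
            sub = strong_plan(succ, goal, actions, path | {state})
            if sub is None:
                break            # this action fails; try the next one
            policy.update(sub)
        else:
            return policy        # all outcomes covered: strong here
    return None                  # no applicable action yields a strong plan
```

For example, with a nondeterministic action `a` from `s0` that may land in either `s1` or the goal `g`, and a deterministic `b` from `s1` to `g`, the sketch returns the policy `{'s0': 'a', 's1': 'b'}`; a domain whose only actions form a loop yields `None` after the cycle check forces backtracking.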
“…The proposed method [23] was evaluated against Gamer and MBP on problems from the 2008 IPC [4]. Gamer outperforms MBP considerably; however, the approach in Fu et al. [58] is orders of magnitude faster than both, exhibits significantly better scalability, and produces solutions of approximately the same length. The results also indicate that LHD is far more beneficial than MCS, i.e., the order in which states are expanded in an MRDAG is not crucial to planning efficiency.…”
Section: PC-SHOP
confidence: 99%