2022
DOI: 10.1613/jair.1.13073

A Comprehensive Framework for Learning Declarative Action Models

Abstract: A declarative action model is a compact representation of the state transitions of dynamic systems that generalizes over world objects. The specification of declarative action models is often a complex hand-crafted task. In this paper we formulate declarative action models via state constraints, and present the learning of such models as a combinatorial search. The comprehensive framework presented here allows us to connect the learning of declarative action models to well-known problem solving tasks. In addit…
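The abstract's framing of model learning as a combinatorial search over candidate action models can be illustrated with a small, grounded sketch. This is not the paper's algorithm (the paper learns lifted models, generalizing over objects, via state constraints); the function names (powerset, consistent, learn_action_model) and the blocks-world example are assumptions chosen only for illustration.

```python
# Minimal, grounded sketch of learning a STRIPS-like action model as a
# combinatorial search over candidate preconditions and effects.
# Illustrative assumption only; not the paper's state-constraint method.
from itertools import chain, combinations


def powerset(atoms):
    """Yield every subset of the candidate atoms as a frozenset."""
    atoms = list(atoms)
    return (frozenset(c) for r in range(len(atoms) + 1)
            for c in combinations(atoms, r))


def consistent(pre, add, delete, transitions):
    """A candidate model explains a transition (s, s') iff pre holds in s
    and s' = (s - delete) | add."""
    return all(pre <= before and after == (before - delete) | add
               for before, after in transitions)


def learn_action_model(transitions, atoms):
    """Naive combinatorial search: prefer the most specific preconditions
    and the smallest effect sets consistent with all observed transitions."""
    pres = sorted(powerset(atoms), key=len, reverse=True)
    effects = sorted(powerset(atoms), key=len)
    for pre in pres:
        for add in effects:
            for delete in effects:
                if consistent(pre, add, delete, transitions):
                    return pre, add, delete
    return None


if __name__ == "__main__":
    # One observed transition of a toy blocks-world "stack(a, b)" action.
    before = frozenset({"holding(a)", "clear(b)"})
    after = frozenset({"on(a,b)", "clear(a)", "handempty"})
    model = learn_action_model([(before, after)], before | after)
    print("pre:", model[0])   # holding(a), clear(b)
    print("add:", model[1])   # on(a,b), clear(a), handempty
    print("del:", model[2])   # holding(a), clear(b)
```

Real learners replace the exhaustive enumeration above with constraint-based pruning of the candidate space; the sketch only makes the search-space view of the problem concrete.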

Cited by 4 publications (2 citation statements). References 55 publications.
“…We showed however that such setting is still challenging at domains where the number of state variables that are checked, or updated, is unbounded. We plan to extend our structured programming approach to address more challenging settings that consider action discovery [52,51], partial observability [3], noisy examples [37], or non-deterministic models [40], as it has been done in the automated planning [28,4,2] and the ILP literature [42].…”
Section: Discussion
confidence: 99%
“…Recent works have also considered learning abstractions for multi-level planning, like those in the task and motion planning (TAMP) (Gravot, Cambon, and Alami 2005; Garrett et al. 2021) and hierarchical planning (Bercher, Alford, and Höller 2019) literature. Some of these efforts consider learning symbolic action abstractions (Zhuo et al. 2009; Nguyen et al. 2017; Silver et al. 2021; Aineto, Jiménez, and Onaindia 2022) or refinement strategies (Chitnis et al. 2016; Mandalika et al. 2019; Chitnis, Kaelbling, and Lozano-Pérez 2019; Wang et al. 2021; Chitnis et al. 2022; Ortiz-Haro et al. 2022); our operator and sampler learning methods take inspiration from these prior works. Recent efforts by Loula et al. (2019, 2020) and Curtis et al. (2021) consider learning both state and action abstractions for TAMP, like we do.…”
Section: Related Work
confidence: 99%