2017
DOI: 10.1609/icaps.v27i1.13850
Framer: Planning Models from Natural Language Action Descriptions

Abstract: In this paper, we describe an approach for learning planning domain models directly from natural language (NL) descriptions of activity sequences. The modelling problem has been identified as a bottleneck for the widespread exploitation of various technologies in Artificial Intelligence, including automated planners. There have been great advances in modelling assisting and model generation tools, including a wide range of domain model acquisition tools. However, for modelling tools, there is the underlying a…

Cited by 24 publications (6 citation statements)
References 15 publications
“…The task we propose is similar to the action-model extraction from text task (Lindsay et al. 2017) and the narrative action-model acquisition from text task (Hayton et al. 2020; Li et al. 2024), in which the goal is to generate the entire domain model from natural language: F_g and A_g if grounded, and F, A, and potentially C if lifted. A downside of these tasks is that they are very difficult to evaluate automatically, as evaluation requires a full understanding of the natural language text and expert knowledge of PDDL domains.…”
Section: Textual and Narrative Action Model Acquisition
confidence: 99%
“…The problem of domain generation from natural language has been studied earlier (Lindsay et al. 2017; Hayton et al. 2020), and recently Guan et al. (2023) also attempted this problem using LLMs. Despite these studies, the task of evaluating the usefulness of the generated domain description remains extremely difficult.…”
Section: Introduction
confidence: 99%
“…In domain model acquisition (DMA) it has been common to assume accurate input data, and this has allowed inductive learning approaches to be exploited, e.g., (Cresswell and Gregory 2011). In recent work, researchers have examined noisy data, exploiting clustering (Lindsay et al. 2017), machine learning, and deep learning (Asai and Fukunaga 2018) as part of their processes. DMA has progressively considered richer target fragments of the PDDL language, from propositional (Wu, Yang, and Jiang 2007; Cresswell and Gregory 2011), including ADL (Zhuo et al. 2010), to learning action costs (Gregory and Lindsay 2016) and numeric constraints (Segura-Muros, Pérez, and Fernández-Olivares 2018).…”
Section: Related Work
confidence: 99%
“…However, the methods proposed so far either generate quite simple and highly specific action models, or rely on human effort to complement or correct automatic extraction. Fully automated approaches have been applied to instructional texts such as recipes, manuals, and navigational instructions (Mei, Bansal, and Walter 2016; Lindsay et al. 2017; Feng, Zhuo, and Kambhampati 2018; Olmo, Sreedharan, and Kambhampati 2021) (or, in some cases, transcriptions of plans generated from a ground-truth domain into text). Such texts, however, lack many of the complexities of narrative texts, which are typically more colloquial…”
Section: Introduction
confidence: 99%