2022
DOI: 10.48550/arxiv.2212.08681
Preprint

Plansformer: Generating Symbolic Plans using Transformers

Cited by 2 publications (3 citation statements)
References 0 publications
“…Conversely, SayCanPay uses additional models trained with domain-specific knowledge collected from the current environment. There are also efforts to fine-tune LLMs like Code-T5 (Wang et al. 2021) to generate plans in PDDL (Pallagani et al. 2022). This requires a significant amount of training data (given LLMs' minimal PDDL exposure) which is not entirely justified by their performance.…”
Section: Related Work on Planning with LLMs
Confidence: 99%
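
The statement above refers to fine-tuning a code language model such as CodeT5 so that it emits symbolic plans for PDDL-described tasks. The sketch below is an assumption-laden illustration, not the authors' released code: it loads the public Salesforce/codet5-base checkpoint (a fine-tuned Plansformer checkpoint is assumed to stand in for it), and the linearised prompt format with <GOAL>/<INIT>/<ACTIONS> markers is hypothetical.

```python
# Minimal sketch: prompting a CodeT5-style encoder-decoder to emit a plan.
# Checkpoint and prompt layout are assumptions; Plansformer is described as
# fine-tuning such a model on planning-problem -> plan pairs.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "Salesforce/codet5-base"  # public base model; fine-tuning assumed

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Hypothetical flattened Blocksworld problem: goal, initial state, action names.
prompt = (
    "<GOAL> (on a b) (on b c) "
    "<INIT> (clear a) (clear b) (clear c) (ontable a) (ontable b) (ontable c) (handempty) "
    "<ACTIONS> pick-up, put-down, stack, unstack"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)

# With a fine-tuned checkpoint, the decoded string would be a grounded action
# sequence such as "(pick-up b) (stack b c) (pick-up a) (stack a b)".
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```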
“…Existing works have demonstrated the planning abilities of both the decoder type (Pallagani et al. 2022) and the encoder-decoder type architectures (Valmeekam et al. 2022, 2023). Since the generated plan is in free-form language and may contain unrecognizable (for the environment) words or incorrect syntax, it cannot be directly translated into actionable steps in the environment.…”
Section: Experimental Setup: Say Model
Confidence: 99%
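
The concern raised in this statement is that a generated plan is only executable if every step parses and names an action the environment recognises. The snippet below is a small illustrative check under assumed conventions (the action vocabulary and the "(action arg ...)" step syntax are hypothetical, not taken from the cited works).

```python
# Sketch of a syntax/vocabulary check on generated plan text: reject any step
# whose action name is not in the environment's assumed action set.
import re

KNOWN_ACTIONS = {"pick-up", "put-down", "stack", "unstack"}  # assumed vocabulary
STEP_PATTERN = re.compile(r"\(([a-z-]+)((?:\s+[a-z0-9_-]+)*)\)")

def validate_plan(plan_text: str) -> list[tuple[str, list[str]]]:
    """Parse '(action arg ...)' steps and reject unrecognized action names."""
    steps = []
    for action, args in STEP_PATTERN.findall(plan_text):
        if action not in KNOWN_ACTIONS:
            raise ValueError(f"Unrecognized action: {action}")
        steps.append((action, args.split()))
    return steps

# Example: a well-formed plan passes; "(teleport a)" would raise ValueError.
print(validate_plan("(pick-up b) (stack b c) (pick-up a) (stack a b)"))
```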
“…Efforts to generate multimodal, text, and image-based goal-conditioned plans are exemplified by (Lu et al. 2023b). Additionally, a subset of studies in this survey investigates the fine-tuning of seq2seq, code-based language models (Pallagani et al. 2022, 2023b), which are noted for their advanced …”
[Residue of the citing survey's "Application of LLMs in Planning" taxonomy table: a Language Translation category (23 papers, including Xie et al. 2023, Guan et al. 2023, and Brohan et al. 2023) and a Plan Generation category (53 papers, including Sermanet et al. 2023, Pallagani et al. 2022, 2023b, and Silver et al. 2022, 2023).]
Section: Plan Generation
Confidence: 99%