2022
DOI: 10.48550/arxiv.2212.04088
Preprint
LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models

Cited by 9 publications (15 citation statements)
References 0 publications
“…Introspective Tips [17] allows the LLM to introspect over the history of environmental feedback. LLM-Planner [127] introduces a grounded re-planning algorithm that dynamically updates LLM-generated plans when object mismatches or unattainable steps are encountered during task completion. In Progprompt [126], assertions are incorporated into the generated script to provide environment-state feedback, allowing error recovery when an action's preconditions are not satisfied.…”
Section: Planning Module
confidence: 99%
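The grounded re-planning idea described above can be sketched as a simple loop: execute the current plan, accumulate observations, and ask the LLM for a new plan when a step fails. This is a minimal illustration, not the paper's actual algorithm; the callables `generate_plan`, `execute`, and `observe` are hypothetical placeholders.

```python
def grounded_replan(generate_plan, execute, observe, task, max_replans=3):
    """Run an LLM-generated plan, re-planning when a step fails.

    generate_plan(task, observed) -> list of action steps
    execute(step) -> bool (False on object mismatch / unattainable step)
    observe() -> set of object names currently visible
    """
    observed = set()
    plan = generate_plan(task, observed)
    for _ in range(max_replans + 1):
        failed = False
        for step in plan:
            ok = execute(step)
            observed |= observe()          # ground future plans in what was seen
            if not ok:                     # e.g. target object not present
                failed = True
                break
        if not failed:
            return True                    # whole plan succeeded
        # re-plan, conditioning the LLM on the newly observed objects
        plan = generate_plan(task, observed)
    return False
```

The key point mirrored from the text is that re-planning is triggered by execution failures and conditioned on the objects observed so far, rather than re-prompting blindly.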
“…Large language models have been explored as an approach to high-level planning [14]- [18] and scene understanding [19], [20]. Vision-language models embedding image features into the same space as text have been applied to open vocabulary object detection [16], [17], natural language maps [15], [17], [21]- [23], and for language-informed navigation [24]- [26].…”
Section: Language Models in Robotics
confidence: 99%
“…The prompt instructs the LLM on which actions to generate and permits only the provided objects in the plan. Song et al. [32] improve on that general idea by utilizing dynamic in-context example retrieval to enhance performance and enable the LLM to re-plan in the event of an error. The authors of [29] reduce problem complexity by not giving the LLM all objects, instead filtering out irrelevant ones by traversing a 3D scene graph, collapsing and expanding nodes, before letting the LLM output a plan.…”
Section: B. Large Language Models in Planning
confidence: 99%
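Dynamic in-context example retrieval, as credited to Song et al. above, amounts to selecting the stored (instruction, plan) pairs most similar to the current instruction and splicing them into the prompt. A minimal sketch, assuming a bag-of-words similarity as a stand-in for the learned embeddings such systems typically use; all names here are illustrative:

```python
def retrieve_examples(instruction, example_bank, k=2):
    """Return the k examples whose instruction shares the most words with the query."""
    query = set(instruction.lower().split())
    def overlap(ex):
        return len(query & set(ex["instruction"].lower().split()))
    return sorted(example_bank, key=overlap, reverse=True)[:k]

def build_prompt(instruction, example_bank, k=2):
    """Assemble a few-shot prompt from the retrieved examples plus the new task."""
    parts = [
        f"Task: {ex['instruction']}\nPlan: {ex['plan']}"
        for ex in retrieve_examples(instruction, example_bank, k)
    ]
    parts.append(f"Task: {instruction}\nPlan:")
    return "\n\n".join(parts)
```

Because the retrieved examples change with each instruction, the LLM sees demonstrations close to the task at hand, which is the source of the performance gains the citing paper describes.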