2022
DOI: 10.48550/arxiv.2205.10625
Preprint

Least-to-Most Prompting Enables Complex Reasoning in Large Language Models

Abstract: We propose a novel prompting strategy, least-to-most prompting, that enables large language models to better perform multi-step reasoning tasks. Least-to-most prompting first reduces a complex problem into a list of subproblems, and then sequentially solves the subproblems, whereby solving a given subproblem is facilitated by the model's answers to previously solved subproblems. Experiments on symbolic manipulation, compositional generalization and numerical reasoning demonstrate that least-to-most prompting c…
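As a rough sketch of the two-stage procedure the abstract describes (first decompose the problem, then solve the subproblems in order, feeding earlier answers back in), the Python snippet below shows one way to wire it up against a generic llm completion callable. The prompt wording and the function signature are illustrative assumptions, not taken from the paper.

from typing import Callable

def least_to_most(problem: str, llm: Callable[[str], str]) -> str:
    # Stage 1: decomposition -- ask the model to reduce the complex problem
    # to a list of simpler subproblems, one per line.
    decomposition = llm(
        "Break the following problem into a list of simpler subproblems, "
        f"one per line:\n{problem}"
    )
    subproblems = [line.strip() for line in decomposition.splitlines() if line.strip()]

    # Stage 2: sequential solving -- each subproblem is answered with the
    # previously solved subproblems and their answers prepended as context,
    # so later subproblems can build on earlier answers.
    context = f"Problem: {problem}\n"
    answer = ""
    for sub in subproblems:
        answer = llm(f"{context}\nQ: {sub}\nA:")
        context += f"\nQ: {sub}\nA: {answer}"

    # The answer to the final subproblem is taken as the overall answer.
    return answer

# Usage: answer = least_to_most("<multi-step word problem>", llm=my_completion_fn)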

Cited by 46 publications (64 citation statements)
References 13 publications

“…Further, while the core of RTNet does not include recurrency, the evidence accumulation system can be thought of as a recurrent network. In fact, recent studies demonstrated the advantages of combining a standard feedforward network with a recurrent network in performing a range of tasks and extrapolating to solve problems of greater complexity than they were trained on [47,48]. Thus, while RTNet remains less biologically plausible than a true recurrent network, it is as biologically plausible as current methods of training neural networks permit.…”
Section: Biological Plausibility of RTNet and Anytime Prediction
Citation type: mentioning (confidence: 99%)
“…Large language models exhibit impressive zero-shot reasoning capabilities: from planning [14] to writing math programs [43]; from solving science problems [44] to using trained verifiers [45] for math word problems. These can be improved with prompting methods such as Least-to-Most [46], Think-Step-by-Step [15] or Chain-of-Thought [47]. Most closely related to this paper are works that use LLM capabilities for robot agents without additional model training.…”
Section: Perception APIs, Control APIs
Citation type: mentioning (confidence: 99%)
“…Prior work has also shown that by using specific prompting language such as "Let's think step by step", one can solicit reasoning from the model to perform tasks that require logical reasoning, such as solving math problems [23], in a zero-shot setting. In addition, various prompting paradigms have been proposed to solicit reasoning from the language model [47,49,54]. For example, chain-of-thought prompting [49] proposes to use the models to generate intermediate results (i.e., chain of thoughts) before generating the final output.…”
Section: Prompting Pre-trained Large Language Models
Citation type: mentioning (confidence: 99%)
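To make the prompting styles mentioned in the statement above concrete, the snippet below builds a zero-shot "Let's think step by step" prompt and a chain-of-thought prompt with one worked exemplar. The word problems and exact wording are illustrative assumptions, not examples from the cited papers.

# A made-up multi-step word problem used only for illustration.
question = (
    "A farm has 3 pens with 7 sheep in each pen, and then 5 sheep are sold. "
    "How many sheep remain?"
)

# Zero-shot reasoning: appending "Let's think step by step" nudges the model
# to produce intermediate reasoning before giving the final answer.
zero_shot_prompt = f"Q: {question}\nA: Let's think step by step."

# Chain-of-thought prompting: a worked exemplar demonstrates the intermediate
# steps, and the model is asked to continue the pattern on the new question.
chain_of_thought_prompt = (
    "Q: There are 4 boxes with 6 apples in each box, and 3 apples are eaten. "
    "How many apples are left?\n"
    "A: 4 boxes * 6 apples = 24 apples. 24 - 3 = 21. The answer is 21.\n\n"
    f"Q: {question}\nA:"
)

# Either prompt would be sent to a text-completion model as-is.
print(zero_shot_prompt)
print(chain_of_thought_prompt)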
“…Pre-trained LLMs support in-context few-shot learning via prompting: instead of finetuning or re-training models for each new task, one can prompt an LLM with a few input and output exemplars of the desired task [6,9,49,54]. For some NLP tasks such as question answering or translation, prompting can perform on par with previous benchmark approaches [6].…”
Section: Prompting Large Language Models for Mobile UI Tasks
Citation type: mentioning (confidence: 99%)
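As a concrete illustration of the few-shot setup described in the statement above, the sketch below assembles an in-context prompt from a couple of input/output exemplars and a new input; the translation task and prompt format are illustrative assumptions rather than material from the cited works.

# Few-shot in-context prompting: input/output exemplars are concatenated ahead
# of the new input and sent to the model as-is, with no fine-tuning.
exemplars = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]

def few_shot_prompt(new_input: str) -> str:
    """Build an English-to-French prompt from the exemplars plus the new input."""
    shots = "\n".join(f"English: {x}\nFrench: {y}" for x, y in exemplars)
    return f"{shots}\nEnglish: {new_input}\nFrench:"

# The model's completion of the final line is taken as its prediction.
print(few_shot_prompt("peppermint"))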