2022
DOI: 10.48550/arxiv.2205.10782
Preprint
Instruction Induction: From Few Examples to Natural Language Task Descriptions

Abstract: Large language models are able to perform a task by conditioning on a few input-output demonstrations, a paradigm known as in-context learning. We show that language models can explicitly infer an underlying task from a few demonstrations by prompting them to generate a natural language instruction that fits the examples. To explore this ability, we introduce the instruction induction challenge, compile a dataset consisting of 24 tasks, and define a novel evaluation metric based on executing the generated instruction…
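The core idea is easy to sketch: format the input-output demonstrations into a meta-prompt that asks the model to state the instruction behind them. Below is a minimal Python sketch; the prompt wording is an illustrative approximation rather than the paper's exact template, and `generate` is a hypothetical placeholder for whatever LLM completion call is available.

```python
from typing import Callable, List, Tuple

def build_induction_prompt(demos: List[Tuple[str, str]]) -> str:
    """Format few-shot demonstrations into a meta-prompt that asks the
    model to state the instruction explaining them. The wording is an
    illustrative approximation of the paper's template."""
    pairs = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in demos)
    return (
        "I gave a friend an instruction and some inputs. The friend "
        "read the instruction and wrote an output for every input.\n\n"
        "Here are the input-output pairs:\n\n"
        f"{pairs}\n\n"
        "The instruction was"
    )

def induce_instruction(demos: List[Tuple[str, str]],
                       generate: Callable[[str], str]) -> str:
    # `generate` is a stand-in for any LLM completion function;
    # it is not an API defined by the paper.
    return generate(build_induction_prompt(demos)).strip()

# Example: two demonstrations of string reversal.
demos = [("cat", "tac"), ("hello", "olleh")]
# induce_instruction(demos, generate) -> e.g. "Reverse the input."
```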

Cited by 9 publications (15 citation statements)
References 10 publications (12 reference statements)
“…for guiding the LLM to generate a qualified answer. Recent work [9] suggests that we can take advantage of the LLM itself to design instructions for in-context learning instead of handcrafting them. We use several high-quality examples to induce a few possible instructions (cf.…”
Section: In-Context Learning Inference (mentioning)
confidence: 99%
“…However, these instructions are too verbose, and in the presence of examples, the model's performance is not significantly affected by them. Therefore, we adopt the more natural approach of [9], generating an instruction that is interpretable by the model.…”
Section: Choice of Prompts and Instructions (mentioning)
confidence: 99%
“…In the era of LLMs, searching directly in the natural language hypothesis space is more popular. Honovich et al. (2022) found that LLMs can generate the task instruction from several demonstrations. A follow-up approach automatically generates many instructions for the given demonstrations and selects the one that maximizes a score function.…”
Section: Prompt Engineering (mentioning)
confidence: 99%
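The generate-and-select pattern this statement describes can be sketched briefly: sample several candidate instructions, score each by executing it on held-out demonstrations, and keep the best. The sketch below uses exact-match scoring as a simplification of the paper's execution-based metric; `execute`, which should prompt a model with an instruction plus a new input, is a hypothetical placeholder.

```python
from typing import Callable, List, Tuple

def execution_accuracy(instruction: str,
                       held_out: List[Tuple[str, str]],
                       execute: Callable[[str, str], str]) -> float:
    """Score an instruction by executing it on held-out inputs and
    exact-matching the outputs (a simplified stand-in for the
    paper's execution-based metric)."""
    hits = sum(execute(instruction, x).strip() == y for x, y in held_out)
    return hits / len(held_out)

def select_best_instruction(candidates: List[str],
                            held_out: List[Tuple[str, str]],
                            execute: Callable[[str, str], str]) -> str:
    """Generate-and-select: keep the candidate with the highest score."""
    return max(candidates,
               key=lambda c: execution_accuracy(c, held_out, execute))
```

In practice the held-out pairs are demonstrations withheld from the induction prompt, so the selected instruction is the one that best reproduces unseen outputs rather than the one that merely sounds plausible.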
“…Instruction Generation by Large Language Models. Some recent works (Honovich et al., 2022; …) generate instructions that describe datasets. While our training procedure learns to generate explanations for datasets akin to these prior works, our primary objective is to explain classifiers to understand their classification rationale, rather than datasets.…”
Section: B Extended Related Work (mentioning)
confidence: 99%