2023
DOI: 10.48550/arxiv.2302.14691
Preprint

In-Context Instruction Learning

Abstract: Instruction learning of Large Language Models (LLMs) has enabled zero-shot task generalization. However, instruction learning has been predominantly approached as a fine-tuning problem, including instruction tuning and reinforcement learning from human feedback, where LLMs are multi-task fine-tuned on various tasks with instructions. In this paper, we present a surprising finding that applying in-context learning to instruction learning, referred to as In-Context Instruction Learning (ICIL), significantly improves…

Cited by 7 publications (8 citation statements)
References 20 publications
“…Within the prompt, we also provide two in-context examples [75] that were composed from selected cases in the dataset. Full examples for all prompts in this work can be found in Appendix B.…”
Section: Summary Generation (mentioning)
confidence: 99%
“…(2) Similar to the previous summarization prompt, we select two cases from the dataset and manually compose the response trajectories to use as few-shot exemplars [75]. Additionally, the two example cases chosen have specifically one Positive and one Negative movement label, in order to avoid any majority label bias [80].…”
Section: Explanation (mentioning)
confidence: 99%
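The exemplar-balancing idea described in the statement above is simple enough to sketch. The snippet below is only an illustration assuming a list of dicts with "text" and "label" fields and Positive/Negative movement labels; it is not the cited authors' actual code.

```python
# Sketch: pick one Positive and one Negative case as few-shot exemplars
# so the prompt does not exhibit majority label bias. The data layout
# (a list of dicts with "text" and "label" keys) is an assumption.
def select_balanced_exemplars(cases):
    positive = next(c for c in cases if c["label"] == "Positive")
    negative = next(c for c in cases if c["label"] == "Negative")
    return [positive, negative]

def format_exemplars(exemplars):
    """Render the exemplars as prompt text, one block per case."""
    blocks = []
    for case in exemplars:
        blocks.append(f"Case: {case['text']}\nMovement: {case['label']}")
    return "\n\n".join(blocks)
```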
“…• ChatGPT We utilize the official gpt-3.5-turbo API 6 to perform inference by in-context learning with 3 or 8 demonstrations following Ye et al (2023).…”
Section: Experiments Setup (mentioning)
confidence: 99%
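For reference, a minimal sketch of this kind of in-context inference against the gpt-3.5-turbo chat completions endpoint is shown below. The instruction text, demonstration format, and helper name are illustrative assumptions rather than the cited paper's exact setup.

```python
# Minimal sketch: in-context learning with k demonstrations via the
# OpenAI chat completions API. The prompt layout and field names are
# assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def icl_inference(instruction, demonstrations, query, k=3):
    """Build a prompt from a task instruction plus k (input, output)
    demonstrations, then query gpt-3.5-turbo on the new input."""
    lines = [instruction, ""]
    for inp, out in demonstrations[:k]:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "\n".join(lines)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```

Varying k (e.g. 3 or 8) only changes how many demonstrations are concatenated into the prompt; the model call itself is unchanged.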
“…Recently, LLMs like ChatGPT or LLaMA have achieved groundbreaking success (Touvron et al, 2023a,b). With the increasing scale of LLMs, novel techniques such as ICL (Ye et al, 2023) and Chain of Thought (CoT) emerged. ICL demonstrates that adding detailed instructions and examples to task prompts can significantly enhance task performance, whether in zero-shot inference or supervised training.…”
Section: Related Work (mentioning)
confidence: 99%
“…Trained on extensive, multi-domain corpora, LLMs assimilate a broad spectrum of common sense and domain-agnostic knowledge, equipping them with the ability to discern nuanced differences and linguistic subtleties across various domains (Dillion et al., 2023; Yang et al., 2023b; Luo et al., 2024a). Moreover, emerging in-context learning (ICL) techniques demonstrate that task-specific performance can be significantly amplified by simply incorporating concise, task-relevant instructions, demonstrations, and examples into the prompts (Ye et al., 2023; …). Although initial research has begun to probe the potential of LLMs and associated techniques in ABSA (Fei et al., 2023; Scaria et al., 2023; Varia et al., 2022), empirical investigations explicitly focusing on multi-domain ABSA are still notably scarce.…”
Section: Introduction (mentioning)
confidence: 99%