Findings of the Association for Computational Linguistics: ACL 2022
DOI: 10.18653/v1/2022.findings-acl.50
Reframing Instructional Prompts to GPTk’s Language

Cited by 47 publications (53 citation statements). References 18 publications.
“…FLAN and T0 (Sanh et al., 2021) train on a large set of tasks, achieving zero-shot generalization to unseen target tasks. Task reframing (Mishra et al., 2021a) proposed several guidelines for reframing task instructions so that models follow them more reliably. Analyses have also been introduced to better understand in-context learning across a large set of training tasks (Min et al., 2021, 2022).…”
Section: Related Work
confidence: 99%
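The reframing idea in the excerpt above can be illustrated with a small sketch: a long, compound instruction is rewritten as itemized, low-level steps, one constraint per line. Both prompts here are invented examples for illustration, not taken from the paper.

```python
# Illustrative sketch of instruction reframing: a dense compound
# instruction vs. the same constraints itemized as short steps.
# Both prompts are hypothetical examples, not from the paper.

raw_prompt = (
    "Answer the question using the passage, keeping in mind that the "
    "answer must be a short span copied verbatim from the passage, "
    "without any extra words."
)

reframed_steps = [
    "Read the passage and the question.",
    "Find the answer in the passage.",
    "Copy the answer verbatim from the passage.",
    "Keep the answer short; add no extra words.",
]

# Itemizing turns one dense sentence into one constraint per line,
# the kind of low-level phrasing the reframing guidelines favor.
reframed_prompt = "\n".join(
    f"{i}. {s}" for i, s in enumerate(reframed_steps, 1)
)
print(reframed_prompt)
```

The reframed version carries the same constraints as the raw prompt, just split into separately checkable directives.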
“…The release of GPT-3 (Brown et al., 2020) initiated a body of work on the emergent ability of LMs to follow discrete natural-language prompts. Consequently, several follow-up studies have used manually designed discrete prompts for probing LMs (Petroni et al., 2019; Jiang et al., 2020), improving LMs' few-shot ability (Schick and Schütze, 2021; Gao et al., 2021; Le Scao and Rush, 2021), and improving their zero-shot ability and transferability (Mishra et al., 2022a; Reynolds and McDonell, 2021). Most importantly, discrete prompts have the advantage of being human-readable and thus easily interpretable, though we do not have efficient, algorithmic ways of reconstructing them.…”
Section: Related Work
confidence: 99%
“…For example, Shin et al. (2020)'s algorithm discovers discrete prompts, yet the results are not human-readable. Prior work also finds that model performance is highly sensitive to small changes in wording (Mishra et al., 2022a) and that optimization over the discrete prompt space is non-trivial and often highly unstable. Our findings here about the disconnect between continuous prompts and their discrete interpretation provide another perspective on the difficulty of discovering discrete prompts via continuous optimization algorithms that (directly or indirectly) leverage the continuous space (more discussion in §6).…”
Section: Related Work
confidence: 99%
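The wording-sensitivity observation above can be sketched as a tiny experiment: score the same labeled examples under several paraphrases of one instruction and report the spread in accuracy. `toy_model` below is a hypothetical stand-in whose behavior depends on prompt length, mimicking the instability of real LMs; an actual study would query a real model.

```python
# Minimal sketch of measuring sensitivity to prompt wording.
# All prompts, examples, and the scoring model are invented for
# illustration; `toy_model` stands in for a real LM call.

paraphrases = [
    "Classify the sentiment of the review as positive or negative.",
    "Is this review positive or negative?",
    "Decide whether the review expresses a positive or a negative opinion.",
]

examples = [
    ("great movie", "positive"),
    ("terrible plot", "negative"),
    ("loved it", "positive"),
    ("boring and slow", "negative"),
]

def toy_model(prompt: str, text: str) -> str:
    """Hypothetical classifier whose lexicon shrinks for short prompts,
    so accuracy varies with wording, as real LMs' accuracy does."""
    positive_words = {"great", "loved"}
    if len(prompt) < 40:  # short paraphrase -> stricter toy lexicon
        positive_words = {"great"}
    return "positive" if any(w in text for w in positive_words) else "negative"

def accuracy(prompt: str) -> float:
    hits = sum(toy_model(prompt, x) == y for x, y in examples)
    return hits / len(examples)

scores = {p: accuracy(p) for p in paraphrases}
print(min(scores.values()), max(scores.values()))  # spread across wordings
```

Even in this toy setup, semantically equivalent paraphrases yield different accuracies, which is the instability the excerpt describes.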
“…Zero-shot prompting was proposed by GPT (Radford et al., 2019; Brown et al., 2020), which makes predictions directly via a PLM without any training. With well-designed prompts, large-scale PLMs have shown notable zero-shot learning ability on various natural language processing tasks (Jiang et al., 2020a; Shin et al., 2020; Reynolds and McDonell, 2021; Mishra et al., 2021).…”
Section: Zero-shot Learning With PLM
confidence: 99%