Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
DOI: 10.18653/v1/2022.naacl-main.266
Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts

Abstract: Fine-tuning continuous prompts for target tasks has recently emerged as a compact alternative to full model fine-tuning. Motivated by these promising results, we investigate the feasibility of extracting a discrete (textual) interpretation of continuous prompts that is faithful to the problem they solve. In practice, we observe a "wayward" behavior between the task solved by continuous prompts and the nearest-neighbor discrete projections of these prompts: one can find continuous prompts that solve a task while…
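To make the "nearest neighbor discrete projection" in the abstract concrete, here is a minimal sketch (not the authors' released code) that maps each continuous prompt vector to the vocabulary token whose input embedding is closest to it. The model name, the prompt length, and the use of cosine similarity are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

emb = model.get_input_embeddings().weight      # (vocab_size, d_model) token embedding table
prompt = torch.randn(5, emb.size(1))           # stand-in for 5 trained soft-prompt vectors

# Cosine similarity between every prompt vector and every token embedding.
p = torch.nn.functional.normalize(prompt, dim=-1)
e = torch.nn.functional.normalize(emb, dim=-1)
nearest = (p @ e.T).argmax(dim=-1)             # nearest token id per prompt position

print(tokenizer.decode(nearest.tolist()))      # the discrete "interpretation" of the prompt
```

The paper's "waywardness" result is that this decoded text can be nearly arbitrary (even the definition of a contradictory task) while the continuous prompt still solves the target task well.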

Cited by 11 publications (9 citation statements)
References 20 publications
“…Prompt engineering, the templatized design of such prompts, can improve performance on the downstream task [70], e.g., code generation. According to Liu et al. [70], approaches for prompting include appending relevant examples [41], appending a table schema [110], generating mutations of the prompt [64], summarizing complicated prompts [52], adding explanations [54], or various manipulations in the embedding space of the model [1,46,114]. Prompting guidelines for Codex include specifying the language (e.g.…”
Section: Natural Language Programming
Citation type: mentioning, confidence: 99%
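The last approach mentioned, manipulating the model's embedding space, is exactly the prompt-tuning setting the Prompt Waywardness paper studies. Below is a hedged sketch of that idea: trainable soft-prompt vectors are prepended to the token embeddings before the frozen language model. The model name, prompt length, and example input are assumptions for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
for p in model.parameters():
    p.requires_grad_(False)               # freeze the LM; only the prompt is trainable

n_prompt = 5
d_model = model.config.n_embd
soft_prompt = torch.nn.Parameter(torch.randn(n_prompt, d_model) * 0.02)

ids = tokenizer("Review: a great movie. Sentiment:", return_tensors="pt").input_ids
tok_emb = model.get_input_embeddings()(ids)           # (1, seq_len, d_model)
inputs = torch.cat([soft_prompt.unsqueeze(0), tok_emb], dim=1)

out = model(inputs_embeds=inputs)                     # forward pass with the soft prompt
print(out.logits.shape)                               # (1, n_prompt + seq_len, vocab_size)
```

In a full prompt-tuning loop, only `soft_prompt` would receive gradient updates from the task loss.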
“…These benchmarks focus on a single language and a single dataset. […] (2023) also release attribution annotations for 1.4M summaries, but the labels are machine-generated rather than human-annotated. GENIE (Khashabi et al., 2022) released 17K human evaluations across 5 tasks, including one English summarization task (XSum).…”
Section: Related Work
Citation type: mentioning, confidence: 99%
“…On the one hand, it gives prompts more degrees of freedom, which leads to better results. On the other hand, those prompts become completely unknown to humans (Khashabi et al., 2022).…”
Section: Related Work
Citation type: mentioning, confidence: 99%
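The "degrees of freedom" contrast in this statement can be made quantitative with a back-of-the-envelope comparison: a discrete prompt of n tokens is a choice from a finite vocabulary, while a continuous prompt of the same length is a point in an n×d-dimensional real space. The numbers below are GPT-2-style values used purely as an illustrative assumption.

```python
import math

vocab_size, n_tokens, d_model = 50_257, 5, 768   # assumed GPT-2-scale figures

discrete_bits = n_tokens * math.log2(vocab_size)  # information in choosing 5 tokens
continuous_params = n_tokens * d_model            # unconstrained real-valued parameters

print(f"discrete prompt:   ~{discrete_bits:.0f} bits of choice")      # ~78 bits
print(f"continuous prompt: {continuous_params} real parameters")      # 3840 parameters
```

This gap is one intuition for why continuous prompts can solve a task while decoding to unrelated text: the vast majority of the continuous space has no faithful discrete counterpart.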