2023
DOI: 10.48550/arxiv.2302.09865
Preprint

Can discrete information extraction prompts generalize across language models?

Abstract: We study whether automatically-induced prompts that effectively extract information from a language model can also be used, out-of-the-box, to probe other language models for the same information. After confirming that discrete prompts induced with the AutoPrompt algorithm outperform manual and semi-manual prompts on the slot-filling task, we demonstrate a drop in performance for AutoPrompt prompts learned on one model and tested on another. We introduce a way to induce prompts by mixing language models at training…
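The abstract describes a cross-model transfer test: a discrete prompt induced on one language model is reused, out of the box, to probe a second model for the same facts. As a rough illustration of that evaluation loop only (not the paper's actual setup), the sketch below scores a single slot-filling template against two masked LMs; the model names, the template, and the probe pairs are illustrative assumptions, and a real AutoPrompt-induced prompt would contain automatically learned trigger tokens rather than a hand-written sentence.

# Minimal sketch, assuming Hugging Face `transformers` is installed.
# The template and probe pairs below are hypothetical placeholders.
from transformers import pipeline

# "Source" and "target" masked LMs for the out-of-the-box transfer check.
MODELS = ["bert-base-cased", "bert-large-cased"]

# One illustrative discrete slot-filling prompt for the relation "born in".
TEMPLATE = "{subject} was born in {mask}."

# Tiny toy probe set: (subject, gold object) pairs.
PROBES = [("Dante Alighieri", "Florence"), ("Marie Curie", "Warsaw")]

for model_name in MODELS:
    fill = pipeline("fill-mask", model=model_name)
    mask = fill.tokenizer.mask_token  # e.g. "[MASK]" for BERT-style models
    hits = 0
    for subject, gold in PROBES:
        prompt = TEMPLATE.format(subject=subject, mask=mask)
        top = fill(prompt, top_k=1)[0]  # highest-scoring fill for the mask slot
        hits += int(top["token_str"].strip() == gold)
    print(f"{model_name}: precision@1 = {hits / len(PROBES):.2f}")

A drop in precision on the second model, relative to the model the prompt was tuned for, is the kind of transfer gap the paper measures at much larger scale.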

Cited by 1 publication (2 citation statements)
References 16 publications (28 reference statements)
“…LLMs have already reached state-of-the-art (SoA) performance in various tasks, and selecting an appropriate prompt has a significant impact. As a result, a significant number of related works investigate techniques to improve prompt construction further and their effectiveness [54,81,88], explore their robustness to permutations of the demonstrative examples [107], their sensitivity to negations [37], and their ability to generalize across different LLMs [79].…”
Section: Language Models: From Prompting to In-Context Learning (citation type: mentioning; confidence: 99%)
“…Rakotonirina et al [79] investigated whether prompts can generalize across different LMs and LLMs focusing on the slot filling NLP task. The authors experimented with manual, semi-manual, and automatic methods for prompt creation.…”
Section: Language Models: From Prompting to In-Context Learning (citation type: mentioning; confidence: 99%)