2023
DOI: 10.48550/arxiv.2302.12170
Preprint

Language Model Crossover: Variation through Few-Shot Prompting

Abstract: This paper pursues the insight that language models naturally enable an intelligent variation operator similar in spirit to evolutionary crossover. In particular, language models of sufficient scale demonstrate in-context learning, i.e. they can learn from associations between a small number of input patterns to generate outputs incorporating such associations (also called few-shot prompting). This ability can be leveraged to form a simple but powerful variation operator, i.e. to prompt a language model with a…
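
The variation operator the abstract describes can be made concrete. Below is a minimal sketch, assuming a generic text-completion callable; the `complete` function, prompt format, and helper names are illustrative assumptions, not the authors' exact implementation.

```python
import random

def lmx_crossover(parents, complete, num_children=1):
    """Minimal sketch of Language Model Crossover (LMX): list several
    parent genotypes (plain strings) as few-shot examples in a prompt,
    then treat the model's continuation as a child genotype."""
    children = []
    for _ in range(num_children):
        pool = list(parents)
        random.shuffle(pool)  # reordering parents varies the offspring
        prompt = "".join(f"Example: {p}\n" for p in pool) + "Example:"
        # `complete` stands in for any text-completion call that maps a
        # prompt string to the model's continuation string.
        child = complete(prompt).strip()
        children.append(child)
    return children
```

With parents such as "a red sports car" and "a blue pickup truck", the continuation tends to be a new string recombining features of both, which is what makes the operator crossover-like.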

Cited by 3 publications (2 citation statements) · References 62 publications

“…In this approach, the GenAI tool takes this text as input and generates the answer or ranks different options [20], in contrast to zero-shot prompting, which involves text generation without training on the specific task [21]. This method can be more scalable and sometimes requires relatively little input data [22]. Although this method can be considered a common way to help GenAI models understand and cope with a task, even this method can be further improved.…”
Section: Prompt Engineering · Citation type: mentioning
Confidence: 99%
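
The few-shot/zero-shot contrast the statement draws is easy to make concrete. A minimal illustration follows; the sentiment task and the prompt wording are invented for exposition, not drawn from the cited works.

```python
# Zero-shot: the task is described, but no solved examples are given.
zero_shot_prompt = (
    "Classify the sentiment of this review as positive or negative:\n"
    "'The battery died within an hour.'"
)

# Few-shot: a handful of input -> output pairs precede the query, so
# the model can infer the task and answer format in context, with no
# task-specific training or parameter updates.
few_shot_prompt = (
    "Review: 'Great screen and fast shipping.' -> positive\n"
    "Review: 'Arrived broken and a week late.' -> negative\n"
    "Review: 'The battery died within an hour.' ->"
)
```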
“…It is shown that the in-context model would imitate the prompting examples to generate similar sequences (Meyerson et al., 2023). One principled and popular explanation is that such a model is an approximation of the posterior predictive distribution in Bayesian statistics (Xie et al., 2021; …)”
Section: A Bayesian View · Citation type: mentioning
Confidence: 99%
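
The Bayesian explanation this statement points to has a standard written form. The following is a sketch with assumed notation (the latent variable θ and the factorization are generic, not copied from Xie et al., 2021): the prompt examples update a posterior over a latent task concept, and generation marginalizes over it.

```latex
% Posterior predictive sketch of in-context learning (notation assumed):
% x_1, ..., x_n are the prompt examples, \theta a latent task concept.
\[
  p(x_{\mathrm{new}} \mid x_1, \dots, x_n)
  = \int p(x_{\mathrm{new}} \mid \theta)\,
         p(\theta \mid x_1, \dots, x_n)\, d\theta
\]
```

Under this reading, few-shot examples act as conditioning data that sharpen the posterior over the task, which is consistent with the imitation behavior the statement attributes to Meyerson et al. (2023).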