2022
DOI: 10.48550/arxiv.2204.01959
Preprint
Data Augmentation for Intent Classification with Off-the-shelf Large Language Models

Abstract: Data augmentation is a widely employed technique to alleviate the problem of data scarcity. In this work, we propose a prompting-based approach to generate labelled training data for intent classification with off-the-shelf language models (LMs) such as GPT-3. An advantage of this method is that no task-specific LM fine-tuning for data generation is required; hence the method requires no hyper-parameter tuning and is applicable even when the available training data is very scarce. We evaluate the proposed meth…
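The abstract describes prompting an off-the-shelf LM with a handful of seed utterances so it generates additional labelled examples for the same intent. A minimal sketch of that idea, assuming an illustrative prompt template (the function name, template wording, and intent label below are hypothetical, not the authors' exact prompt):

```python
def build_augmentation_prompt(intent: str, seed_utterances: list[str], n_new: int = 5) -> str:
    """Assemble a few-shot prompt asking an off-the-shelf LM for n_new
    additional utterances expressing the given intent. No fine-tuning is
    involved: the seed examples alone condition the generation."""
    lines = [f"The following sentences all express the intent '{intent}':"]
    for i, utt in enumerate(seed_utterances, start=1):
        lines.append(f"{i}. {utt}")
    lines.append(f"Write {n_new} more sentences expressing the same intent:")
    # Start the next numbered item so the LM continues the list.
    lines.append(f"{len(seed_utterances) + 1}.")
    return "\n".join(lines)

prompt = build_augmentation_prompt(
    "book_flight",  # hypothetical intent label
    ["I need a flight to Boston tomorrow", "Can you book me a plane ticket?"],
)
print(prompt)
```

The LM's completion would then be split into individual sentences, each labelled with the seed intent and added to the training set; since the prompt is the only task-specific component, no hyper-parameter tuning of the generator is needed.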

Cited by 5 publications (10 citation statements) · References 13 publications
“…PLMs have demonstrated impressive emerging conversational capabilities, enabling big performance improvements in various dialogue tasks (Brown et al., 2020; Shuster et al., 2022; Peng et al., 2022; Kulhánek et al., 2021). Particularly, PLMs have been prompted to augment existing conversational data (Chen et al., 2022; Mehri et al., 2022; Sahu et al., 2022).…”
Section: Triadic Conversations; Dyadic Conversations
Confidence: 99%
“…In particular, in-context learning, where few-shot examples are provided in the input prompt of a PLM, has been found to provide valuable information in guiding generation output (Min et al., 2022; Brown et al., 2020; Min et al., 2021; Lu et al., 2021b). As a result, many recent efforts in prompting PLMs have sought to augment various natural language processing datasets (Chen et al., 2022; Sahu et al., 2022; Mehri et al., 2022; Rosenbaum et al., 2022a). Prompting has become a viable "solution" for augmentation in dialogue tasks, which have traditionally been considered challenging due to the difficulty of augmenting dialogue context (Chen et al., 2022).…”
Section: Related Work
Confidence: 99%