2021
DOI: 10.48550/arxiv.2109.08754
Preprint

Semi-Supervised Few-Shot Intent Classification and Slot Filling

Samyadeep Basu,
Karine Ip Kiun Chong,
Amr Sharaf
et al.

Abstract: Intent classification (IC) and slot filling (SF) are two fundamental tasks in modern Natural Language Understanding (NLU) systems. Collecting and annotating large amounts of data to train deep learning models for such systems is not scalable. This problem can be addressed by learning from few examples using fast supervised meta-learning techniques such as prototypical networks. In this work, we systematically investigate how contrastive learning and unsupervised data augmentation methods can benefit these exis…
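As a rough illustration of the prototypical-network technique the abstract refers to, the sketch below shows a single few-shot episode: class prototypes are the mean of the support embeddings, and queries are classified by distance to the nearest prototype. The encoder is replaced by random vectors, and all shapes and class counts are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of one prototypical-network episode for intent
# classification. Random vectors stand in for sentence-encoder outputs.
import torch
import torch.nn.functional as F

def prototypical_loss(support_emb, support_labels, query_emb, query_labels, n_classes):
    """Prototypes = mean support embedding per class; queries are scored
    by negative Euclidean distance to each prototype."""
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])                                           # (n_classes, dim)
    dists = torch.cdist(query_emb, prototypes)   # (n_query, n_classes)
    log_p = F.log_softmax(-dists, dim=1)         # nearer prototype -> higher prob
    return F.nll_loss(log_p, query_labels)

# Toy 3-way episode: 5 support and 4 query examples per class.
dim, n_classes = 64, 3
support_emb = torch.randn(n_classes * 5, dim)
support_labels = torch.arange(n_classes).repeat_interleave(5)
query_emb = torch.randn(n_classes * 4, dim)
query_labels = torch.arange(n_classes).repeat_interleave(4)
loss = prototypical_loss(support_emb, support_labels, query_emb, query_labels, n_classes)
print(loss.item())
```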

Cited by 1 publication (1 citation statement)
References 10 publications
“…Indeed, research most closely related to the present work is the Slot-List model by Basu et al. [2021], which focuses on the meta-learning aspect of semi-supervised learning rather than on using unlabeled data. In a similar vein, the GAN-BERT model [Croce et al., 2020] shows that an adversarial learning regime can be devised to ensure that the extracted BERT features are similar across the unlabeled and labeled data sets, substantially boosting classification performance.…”
Section: Balanced Decision Boundary
confidence: 99%
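The adversarial regime the citing paper describes can be sketched in simplified form: a discriminator tries to tell labeled from unlabeled embeddings, and the encoder is trained to fool it so the two feature distributions become indistinguishable. Note this simplifies GAN-BERT, which actually uses an SS-GAN with a (k+1)-way discriminator over a generator's fake features; every module name and size below is a hypothetical stand-in, not the published implementation.

```python
# Minimal sketch of adversarial feature alignment between labeled and
# unlabeled examples, in the spirit of the regime the quote describes.
import torch
import torch.nn as nn

dim = 128
encoder = nn.Sequential(nn.Linear(768, dim), nn.ReLU())  # stand-in for BERT pooling
discriminator = nn.Linear(dim, 1)                        # labeled vs. unlabeled logit
bce = nn.BCEWithLogitsLoss()

opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
opt_e = torch.optim.Adam(encoder.parameters(), lr=1e-4)

labeled = torch.randn(32, 768)    # placeholder "BERT" features
unlabeled = torch.randn(32, 768)

# 1) Discriminator step: distinguish labeled (1) from unlabeled (0) embeddings.
z_l, z_u = encoder(labeled).detach(), encoder(unlabeled).detach()
d_loss = bce(discriminator(z_l), torch.ones(32, 1)) + \
         bce(discriminator(z_u), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# 2) Encoder step: fool the discriminator so labeled and unlabeled
#    feature distributions become indistinguishable.
z_u = encoder(unlabeled)
e_loss = bce(discriminator(z_u), torch.ones(32, 1))
opt_e.zero_grad(); e_loss.backward(); opt_e.step()
```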