Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), 2021
DOI: 10.18653/v1/2021.acl-short.110

Modeling Discriminative Representations for Out-of-Domain Detection with Supervised Contrastive Learning

Abstract: Detecting Out-of-Domain (OOD) or unknown intents from user queries is essential in a task-oriented dialog system. A key challenge of OOD detection is to learn discriminative semantic features. Traditional cross-entropy loss only focuses on whether a sample is correctly classified, and does not explicitly distinguish the margins between categories. In this paper, we propose a supervised contrastive learning objective to minimize intra-class variance by pulling together in-domain intents belonging to the same class and to maximize inter-class variance by pushing apart samples from different classes.
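For readers skimming this report, the supervised contrastive objective referenced in the abstract is typically written as follows (notation as in Khosla et al., 2020; this is the standard formulation, not necessarily a verbatim reproduction of the paper's equation):

    \mathcal{L}_{\mathrm{SCL}} = \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)}

Here z_i is the normalized representation of sample i, P(i) is the set of other in-batch samples sharing i's label, A(i) is all other in-batch samples, and \tau is a temperature. Minimizing this loss pulls same-class intents together (low intra-class variance) and pushes different-class intents apart (high inter-class variance).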

Cited by 23 publications (30 citation statements); references 27 publications. Citation statements below are ordered by relevance.
“…We show the detailed statistics of CLINC (Larson et al., 2019) and BANKING (Casanueva et al., 2020). In real scenarios, we can use OOD detection models (Zeng et al., 2021) to collect high-quality OOD data for OOD intent discovery.…”
Section: Datasets (mentioning)
confidence: 99%
“…Specifically, we first learn intent features using a context encoder such as BERT, then add two independent transformation heads (an instance-level head f and a class-level head g) on top of BERT. In the IND pre-training stage, we use the head f to perform supervised instance-level contrastive learning (Khosla et al., 2020; Gunel et al., 2021; Zeng et al., 2021) and the head g to compute a traditional classification loss such as cross-entropy. In the OOD clustering stage, we employ similar objectives for these two heads, where f is still used for instance-level contrastive learning and g is used to perform class(cluster)-level contrastive learning.…”
Section: Introduction (mentioning)
confidence: 99%
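The two-head architecture described in this citation statement can be sketched as follows. This is a minimal illustration assuming a Hugging Face BERT encoder; the head names, projection dimension, and [CLS] pooling choice are assumptions for exposition, not details taken from the cited paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from transformers import AutoModel

    class TwoHeadIntentModel(nn.Module):
        """BERT encoder with an instance-level projection head f (for
        supervised contrastive learning) and a class-level head g (for
        cross-entropy or cluster-level objectives)."""

        def __init__(self, num_classes: int, proj_dim: int = 128):
            super().__init__()
            self.encoder = AutoModel.from_pretrained("bert-base-uncased")
            hidden = self.encoder.config.hidden_size
            # instance-level head f: MLP projection for contrastive learning
            self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, proj_dim))
            # class-level head g: linear classifier over IND intents
            self.g = nn.Linear(hidden, num_classes)

        def forward(self, input_ids, attention_mask):
            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
            cls = out.last_hidden_state[:, 0]      # [CLS] token as intent feature
            z = F.normalize(self.f(cls), dim=-1)   # unit-norm embedding for SCL
            logits = self.g(cls)                   # logits for classification loss
            return z, logits

During IND pre-training, z would feed the supervised contrastive loss and logits the cross-entropy loss; the OOD clustering stage described in the quote reuses the same two heads with clustering-oriented targets.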
“…Lin and Xu (2019) employ an unsupervised density-based novelty detection algorithm, the local outlier factor (LOF), to detect OOD intents. Further, Zeng et al. (2021a) propose a supervised contrastive learning objective to learn discriminative intent features. However, these methods ignore the key challenge of OOD detection: over-confidence.…”
Section: Ground Truth Input Sentence (mentioning)
confidence: 99%
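As context for the LOF approach mentioned above, here is a minimal detection sketch using scikit-learn's LocalOutlierFactor. The placeholder features and neighbor count are illustrative assumptions, not the cited authors' exact pipeline; in practice the features would come from a trained intent encoder.

    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    # Placeholder embeddings; in practice these are encoder features of
    # labeled in-domain utterances (train) and incoming user queries (test).
    ind_feats = np.random.randn(500, 768).astype(np.float32)
    test_feats = np.random.randn(10, 768).astype(np.float32)

    # novelty=True fits on IND data only and scores unseen points at predict time.
    lof = LocalOutlierFactor(n_neighbors=20, novelty=True)
    lof.fit(ind_feats)

    pred = lof.predict(test_feats)   # +1 = in-domain, -1 = OOD by local density
    is_ood = pred == -1

A query falling in a low-density region relative to its nearest IND neighbors is flagged as an outlier, i.e., an OOD intent.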
“…Out-of-domain (OOD) detection is a key component of task-oriented dialogue systems (Gnewuch et al., 2017; Akasaki and Kaji, 2017; Shum et al., 2018; Tulshan and Dhage, 2019). It aims to decide whether a user query falls outside the range of predefined supported intents, so that the system avoids performing wrong operations (Lin and Xu, 2019; Xu et al., 2020; Zeng et al., 2021a). Due to the complexity of annotating OOD intents, most work focuses on unsupervised OOD detection, where there is no labeled OOD data but only labeled in-domain (IND) data (Xu et al., 2020).…”
Section: Introduction (mentioning)
confidence: 99%
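To make the unsupervised setting in this quote concrete: a common baseline trains a classifier on labeled IND intents only and rejects a query as OOD when its maximum softmax probability falls below a confidence threshold (Hendrycks and Gimpel, 2017). The sketch below assumes classifier logits like those produced by the two-head model above; the threshold value is an illustrative assumption.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def detect_ood(logits: torch.Tensor, threshold: float = 0.5):
        """Maximum-softmax-probability OOD detection on classifier logits.

        Returns (is_ood, predicted_intent): a query is rejected as OOD when
        its top softmax probability is below the confidence threshold."""
        probs = F.softmax(logits, dim=-1)
        conf, pred = probs.max(dim=-1)
        return conf < threshold, pred

    # Example: logits for 3 queries over 5 IND intents (placeholder values).
    logits = torch.tensor([[4.0, 0.1, 0.1, 0.1, 0.1],   # peaked -> kept as IND
                           [0.5, 0.4, 0.5, 0.6, 0.5],   # flat -> flagged as OOD
                           [2.0, 1.8, 0.1, 0.1, 0.1]])
    is_ood, pred = detect_ood(logits)

The over-confidence problem criticized in the previous citation statement is precisely that such softmax scores can remain high even on OOD inputs, which motivates the discriminative-representation and margin-aware objectives discussed in this report.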