Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.447

Adversarial Self-Supervised Learning for Out-of-Domain Detection

Abstract: Detecting out-of-domain (OOD) intents is crucial for deployed task-oriented dialogue systems. Previous unsupervised OOD detection methods only extract discriminative features of different in-domain intents, while supervised counterparts can directly distinguish OOD from in-domain intents but require extensive labeled OOD data. To combine the benefits of both types, we propose a self-supervised contrastive learning framework to model discriminative semantic features of both in-domain intents and OOD intents from …

Cited by 16 publications (11 citation statements)
References 24 publications
“…In-domain Samples Contrastive Learning Recent contrastive learning methods have proven effective for learning unsupervised representations for downstream tasks. Winkens et al (2020); Zeng et al (2021b) combine a cross-entropy loss on labeled IND data with an instance-wise contrastive learning (CL) loss on unlabeled data (including unlabeled IND and OOD intents). They require a large unlabeled corpus and cannot explicitly distinguish different intent types.…”
Section: Encoder
confidence: 99%
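
The combination described in this statement can be sketched as a supervised cross-entropy loss on labeled IND utterances plus an instance-wise (SimCLR-style) contrastive loss over two augmented views of unlabeled utterances. This is a minimal illustration under assumed settings; the encoder, temperature, and loss weighting are placeholders, not the cited papers' configurations.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Instance-wise contrastive (NT-Xent) loss over two views of a batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)               # [2B, dim]
    sim = z @ z.t() / temperature                # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))            # exclude self-similarity
    b = z1.size(0)
    # the positive for view i is the other view of the same utterance
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)]).to(sim.device)
    return F.cross_entropy(sim, targets)

def combined_loss(ind_logits, ind_labels, z1, z2, cl_weight=1.0):
    """Cross-entropy on labeled IND data + contrastive loss on unlabeled views."""
    return F.cross_entropy(ind_logits, ind_labels) + cl_weight * nt_xent_loss(z1, z2)
```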
“…For example, Lin et al [6] and Yan et al [7] propose to first learn discriminative deep features through a large margin cosine loss and a Gaussian mixture loss, and then apply the local outlier factor algorithm to detect open intents. Xu et al [9] and Zeng et al [11] also propose to first learn discriminative deep features, through a large margin cosine loss and a self-supervised contrastive loss, and then apply the Mahalanobis distance to detect open intents. Considering that feature learning is optimized separately from decision-boundary learning in these methods, Zhang et al [10] propose a jointly optimized adaptive decision boundary method for outlier detection.…”
Section: Open Intents
confidence: 99%
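
As a concrete illustration of the local-outlier-factor step mentioned in this statement, a minimal scikit-learn sketch; the feature arrays and hyperparameters here are illustrative placeholders, not the cited papers' settings.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# ind_features / test_features stand in for encoder outputs (placeholders)
ind_features = np.random.randn(1000, 128)
test_features = np.random.randn(50, 128)

# novelty=True lets the fitted detector score samples unseen during fitting
lof = LocalOutlierFactor(n_neighbors=20, novelty=True, contamination=0.05)
lof.fit(ind_features)

pred = lof.predict(test_features)           # +1 = in-domain, -1 = open/OOD intent
scores = lof.score_samples(test_features)   # lower score = more anomalous
```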
“…Xu et al [9] propose a BiLSTM with LMCL as a feature extractor and use Gaussian discriminant analysis (GDA) and the Mahalanobis distance to detect open intents. Zeng et al [11] propose a contrastive learning framework to model discriminative features and also use GDA and the Mahalanobis distance to detect open intents. Like the post-processing methods, the joint-optimization methods also detect open intents in a post-processing way, but they jointly learn the feature space and the decision boundary for outlier detection.…”
Section: A. Open Intent Classification
confidence: 99%
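
The GDA-plus-Mahalanobis detection described here can be sketched as follows: fit per-class means and a shared (tied) covariance over IND features, then score a test utterance by its minimum Mahalanobis distance to any class mean, flagging it as an open intent when that distance exceeds a threshold. The feature extractor and threshold are assumptions made for illustration.

```python
import numpy as np

def fit_gda(features, labels):
    """Fit per-class means and a shared (tied) covariance over IND features."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    precision = np.linalg.pinv(centered.T @ centered / len(features))
    return means, precision

def min_mahalanobis(x, means, precision):
    """Minimum squared Mahalanobis distance of x to any IND class mean."""
    return min(float((x - mu) @ precision @ (x - mu)) for mu in means.values())

# Detection rule (tau is a validation-tuned threshold, assumed here):
# is_open_intent = min_mahalanobis(feature_vector, means, precision) > tau
```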
“…The adversarial inputs lead to serious concerns about AI safety. However, despite the existing literature on adversarial training with SSL, which focuses on algorithm design and empirical study, e.g., Kim et al (2020); Zeng et al (2021); Ho and Vasconcelos (2020); Gowal et al (2020); Chen et al (2020b), there is little theoretical understanding of it.…”
Section: Introduction
confidence: 99%
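
To make the phrase "adversarial training with SSL" in this statement concrete, a minimal FGSM-style sketch that perturbs utterance embeddings in the direction that increases a self-supervised loss, yielding harder views for training. This is a generic recipe under assumed settings (the epsilon and the loss callable are placeholders), not the exact procedure of any paper cited above.

```python
import torch

def fgsm_perturb(embeds, ssl_loss_fn, epsilon=1e-3):
    """One-step (FGSM-style) adversarial perturbation of input embeddings.

    embeds:      tensor of embeddings with requires_grad=True
    ssl_loss_fn: callable mapping embeddings to a scalar self-supervised loss
                 (e.g. a contrastive loss on the encoded views)
    """
    loss = ssl_loss_fn(embeds)
    grad, = torch.autograd.grad(loss, embeds)
    # step in the loss-increasing direction to obtain a harder, adversarial view
    return (embeds + epsilon * grad.sign()).detach()

# The perturbed embeddings can then be re-encoded and treated as an extra
# positive view in the self-supervised (contrastive) objective.
```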