Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1364

Out-of-Domain Detection for Low-Resource Text Classification Tasks

Abstract: Out-of-domain (OOD) detection for low-resource text classification is a realistic but understudied task. The goal is to detect OOD cases with limited in-domain (ID) training data, since we observe that training data is often insufficient in machine learning applications. In this work, we propose an OOD-resistant Prototypical Network to tackle this zero-shot OOD detection and few-shot ID classification task. Evaluation on real-world datasets shows that the proposed solution outperforms state-of-the-art methods…

Cited by 39 publications (67 citation statements)
References 14 publications (23 reference statements)
“…We take into account a commonly used metric for OOS detection, i.e. equal error rate (EER) (Lane et al., 2007; Ryu et al., 2017, 2018; Tan et al., 2019), which corresponds to the classification error rate when the threshold θ is set to a value where the false acceptance rate (FAR) and false rejection rate (FRR) are the closest. These two metrics are defined as: In addition, in-scope error rate (ISER) is considered to report IS performance, i.e.…”
Section: Evaluation Metrics
confidence: 99%
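The FAR/FRR equations referenced in the quoted snippet did not survive extraction. As a hedged illustration only, using the standard convention that FAR is the fraction of out-of-scope samples wrongly accepted as in-scope and FRR is the fraction of in-scope samples wrongly rejected, the sketch below shows how EER can be computed from a threshold sweep; the function name, score convention, and synthetic data are assumptions, not taken from the cited papers.

import numpy as np

def far_frr_eer(ood_scores, is_scores):
    # Sweep a rejection threshold over detection scores and return FAR, FRR,
    # and the equal error rate (EER). A sample is rejected as out-of-scope
    # (OOS) when its score exceeds the threshold; higher score = more OOS-like.
    thresholds = np.sort(np.concatenate([ood_scores, is_scores]))
    fars, frrs = [], []
    for t in thresholds:
        fars.append(np.mean(ood_scores <= t))  # OOS samples wrongly accepted
        frrs.append(np.mean(is_scores > t))    # IS samples wrongly rejected
    fars, frrs = np.array(fars), np.array(frrs)
    i = np.argmin(np.abs(fars - frrs))  # threshold where FAR and FRR are closest
    return fars[i], frrs[i], (fars[i] + frrs[i]) / 2  # EER reported as their mean

# Toy usage with synthetic scores.
rng = np.random.default_rng(0)
print(far_frr_eer(rng.normal(0.7, 0.1, 200), rng.normal(0.3, 0.1, 200)))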
“…These two metrics are defined as: In addition, in-scope error rate (ISER) is considered to report IS performance, i.e. the accuracy considering only IS samples, as the class error rate in (Tan et al., 2019). This metric is important to evaluate whether the alternative classification methods are able to keep up with the performance of their counterparts in the classification task.…”
Section: Evaluation Metrics
confidence: 99%
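ISER, as described in the quoted snippet, is simply the classification error computed over in-scope samples only. A minimal, assumed helper (names hypothetical):

import numpy as np

def in_scope_error_rate(y_true, y_pred, is_mask):
    # Classification error restricted to samples that are truly in scope.
    y_true, y_pred, is_mask = map(np.asarray, (y_true, y_pred, is_mask))
    return float(np.mean(y_true[is_mask] != y_pred[is_mask]))

# Toy usage: 4 in-scope samples (one misclassified), 1 OOS sample ignored -> 0.25
print(in_scope_error_rate([0, 1, 2, 1, -1], [0, 1, 1, 1, 0],
                          [True, True, True, True, False]))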
“…Few previous studies are related to both zero-shot OOD detection and few-shot ID classification for low-resource intent detection, which has only recently begun to be studied. For instance, Prototypical Networks (ProtoNet) [10], a metric-based meta-learning method for few-shot image classification, has been actively applied to text classification and OOD detection [11,12]. One major limitation of such a method for OOD detection is that it only considers carefully selected datasets with metadata, like large-scale independent source domains [13].…”
Section: Introduction
confidence: 99%
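For context, a Prototypical Network classifies a query by the distance of its embedding to per-class prototypes (the mean embedding of each class's support examples); a distance threshold can then flag far-away queries as OOD. The sketch below is a generic, assumed formulation of that idea, not the OOD-resistant variant proposed in the paper; the function name and the ood_threshold parameter are illustrative.

import torch

def prototypical_predict(support_emb, support_labels, query_emb, n_classes,
                         ood_threshold=None):
    # support_emb: [n_support, d] embeddings of labelled ID examples
    # support_labels: [n_support] integer labels in [0, n_classes)
    # query_emb: [n_query, d] embeddings to classify
    # Class prototype = mean embedding of that class's support examples.
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])
    # Squared Euclidean distance from each query to each prototype.
    dists = torch.cdist(query_emb, prototypes) ** 2
    preds = dists.argmin(dim=1)
    if ood_threshold is not None:
        # Queries farther than the threshold from every prototype -> OOD (-1).
        min_dist = dists.min(dim=1).values
        preds = torch.where(min_dist > ood_threshold,
                            torch.full_like(preds, -1), preds)
    return preds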