2019
DOI: 10.48550/arxiv.1904.12220
Preprint

Analysis of Confident-Classifiers for Out-of-distribution Detection

Abstract: Discriminatively trained neural classifiers can be trusted only when the input data come from the training distribution (in-distribution). Detecting out-of-distribution (OOD) samples is therefore important for avoiding classification errors. In the context of OOD detection for image classification, one recent approach trains a classifier called a "confident-classifier" by minimizing the standard cross-entropy loss on in-distribution samples and minimizing the KL divergence between the pred…
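The abstract is cut off, but the confident-classifier objective it refers to is commonly written as cross-entropy on in-distribution samples plus a KL term that pushes the predictive distribution on OOD samples toward the uniform distribution. A minimal PyTorch-style sketch under that assumption (the function name and the weight beta are illustrative, not taken from the paper):

import torch
import torch.nn.functional as F

def confident_classifier_loss(logits_in, labels_in, logits_ood, beta=1.0):
    # Standard cross-entropy on in-distribution samples.
    ce = F.cross_entropy(logits_in, labels_in)

    # KL(uniform || p(y|x_ood)): push predictions on OOD samples toward
    # the uniform distribution, i.e. maximal predictive entropy.
    num_classes = logits_in.shape[1]
    log_probs_ood = F.log_softmax(logits_ood, dim=1)
    uniform = torch.full_like(log_probs_ood, 1.0 / num_classes)
    kl = F.kl_div(log_probs_ood, uniform, reduction="batchmean")

    # beta trades off the two terms; its value here is illustrative.
    return ce + beta * kl

In the original confident-classifier formulation the OOD samples fed to the KL term are typically generated near the in-distribution boundary (e.g. by a GAN) rather than drawn from a real OOD dataset.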

Cited by 4 publications (5 citation statements)
References 8 publications (11 reference statements)
“…Unfortunately, both BNNs and NLMs struggle with modeling OOD uncertainty. While BNNs are equivalent to GPs in the limit of infinite width (Neal, 1996), recent work shows that, unlike GPs, the epistemic uncertainty of finite-sized BNN classifiers does not increase in data-poor regions (Vernekar et al., 2019b). In this work, we show that NLM likewise struggles with providing high epistemic uncertainty for OOD data, irrespective of the architecture chosen.…”
Section: Introduction
confidence: 64%
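For context, the epistemic uncertainty discussed in that statement is commonly estimated as the mutual information between the prediction and the model parameters, computed from posterior or ensemble samples. A small NumPy sketch of that decomposition, offered as background rather than as the cited papers' exact procedure:

import numpy as np

def epistemic_uncertainty(mc_probs):
    # mc_probs: (num_samples, num_classes) class probabilities,
    # one row per posterior sample (e.g. MC-dropout pass or ensemble member).
    eps = 1e-12
    mean_probs = mc_probs.mean(axis=0)

    # Total predictive entropy of the averaged prediction.
    total = -np.sum(mean_probs * np.log(mean_probs + eps))

    # Expected entropy of the individual predictions (aleatoric part).
    aleatoric = -np.mean(np.sum(mc_probs * np.log(mc_probs + eps), axis=1))

    # Mutual information = total - aleatoric: the epistemic part, which
    # should grow in data-poor regions for a well-behaved model.
    return total - aleatoric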
“…They fall into two categories. The first constrains a classifier to output high-entropy predictions on a priori OOD examples (Liang et al., 2017; Lee et al., 2017; Sricharan & Srivastava, 2018), and the second trains a classifier on the original classes plus an additional class of OOD examples (Vernekar et al., 2019b; 2019a). However, these methods do not provide uncertainty decomposition.…”
Section: Introduction
confidence: 99%
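The second category mentioned above reserves an extra class index for OOD inputs. A hedged sketch of that formulation (the helper's name and the source of the OOD batch are assumptions, not details from the cited work):

import torch
import torch.nn.functional as F

def k_plus_one_loss(model, x_in, y_in, x_ood, num_classes):
    # The classifier has num_classes + 1 outputs; index num_classes is
    # reserved for the OOD class.
    logits_in = model(x_in)
    logits_ood = model(x_ood)

    # Label every OOD sample with the reserved class index.
    y_ood = torch.full((x_ood.shape[0],), num_classes, dtype=torch.long)

    loss_in = F.cross_entropy(logits_in, y_in)
    loss_ood = F.cross_entropy(logits_ood, y_ood)
    return loss_in + loss_ood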
“…Outliers: A considerable amount of research investigates how to identify when DNNs are predicting on out-of-distribution (OOD) samples. The model's confidence for different outputs can be made more uniform on OOD samples [155]-[160], or the model can be trained explicitly to assign a confidence score which can be used to tell how likely the input was out of distribution [24], [161], [162]. Other methods have also been introduced to distinguish the outputs of DNNs on OOD and in-distribution samples [163]-[167], including approaches based on inspecting the model's internal representations or training set, or performing statistical testing [168]-[172].…”
Section: A. Attribution of Known ML Failures
confidence: 99%
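One widely used post-hoc score of the kind surveyed above thresholds the classifier's maximum softmax probability. A minimal sketch (the threshold value is illustrative and would normally be calibrated on held-out data):

import torch
import torch.nn.functional as F

def max_softmax_score(logits):
    # Higher score = more confident; OOD inputs tend to score lower.
    return F.softmax(logits, dim=1).max(dim=1).values

def flag_ood(logits, threshold=0.5):
    # Flag inputs whose top softmax probability falls below the threshold.
    return max_softmax_score(logits) < threshold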
“…In the recent past, a pattern has emerged in which the majority of heuristics-based defenses (both post-hoc detection and training-based) are easily broken by new attacks [115, 112]. Therefore, developing a coherent theory and methodology that guides practical design for anomaly detection in DL-based systems [116], together with fundamental characterizations of the existence of adversarial examples [117], is of utmost importance. How to leverage special learning properties, such as spatial and temporal consistencies, to identify OOD examples [118, 119] is also worth further exploration.…”
Section: Conclusion and Open Questions
confidence: 99%