2021
DOI: 10.48550/arxiv.2112.00787
Preprint

Provable Guarantees for Understanding Out-of-distribution Detection

Abstract: Out-of-distribution (OOD) detection is important for deploying machine learning models in the real world, where test data from shifted distributions can naturally arise. While a plethora of algorithmic approaches have recently emerged for OOD detection, a critical gap remains in theoretical understanding. In this work, we develop an analytical framework that characterizes and unifies the theoretical understanding for OOD detection. Our analytical framework motivates a novel OOD detection method for neural netw…

Cited by 2 publications (5 citation statements)
References 10 publications
“…We emphasize that in Theorem 1, µ_i and µ_out can have arbitrary configurations. We refer the reader to the extended version (Morteza and Li 2021) for the proof of Theorem 1 and detailed discussions on other variants.…”
Section: ES(x)
confidence: 99%
“…We refer the reader to the extended version (Morteza and Li 2021) for the proof of Proposition 2. The next corollary explains how the performance of our method degrades as the number of classes increases.…”
Section: Feature Representation Setup
confidence: 99%
“…Lee et al [24] presented a score using the Mahalanobis distance with respect to the closest class-conditional distribution in the feature space of the pre-trained network, which verifies that information from the feature space also contributes to OOD detection. Morteza and Li [33] analytically derived an optimal form of OOD scoring function called GEM (Gaussian mixture based Energy Measurement), which is provably aligned with the true log-likelihood for OOD detection. However, post-hoc based methods still assign high-confidence predictions to OOD data due to the lack of supervision signals from OOD data [10].…”
Section: OOD Detection With Post-hoc
confidence: 99%
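To make the GEM idea quoted above concrete, here is a minimal sketch of a GEM-style score: the log-sum-exp of negative Mahalanobis distances from a feature vector to each class mean under a shared covariance, which is (up to an additive constant) the log-likelihood of a Gaussian mixture. The function name and shapes are illustrative assumptions, not the authors' reference implementation; in practice the class means and covariance are estimated from in-distribution training features.

```python
import numpy as np

def gem_score(z, mus, sigma_inv):
    """GEM-style OOD score (illustrative sketch).

    z         : (d,) feature vector of a test input
    mus       : (num_classes, d) per-class feature means
    sigma_inv : (d, d) inverse of the shared (tied) covariance

    Returns log sum_c exp(-0.5 * (z - mu_c)^T Sigma^{-1} (z - mu_c)),
    i.e. the Gaussian-mixture log-likelihood up to a constant.
    Higher score => more likely in-distribution.
    """
    diffs = mus - z  # (num_classes, d)
    # Mahalanobis quadratic form for each class mean
    maha = np.einsum("cd,de,ce->c", diffs, sigma_inv, diffs)
    # numerically stable log-sum-exp over classes
    m = (-0.5 * maha).max()
    return m + np.log(np.exp(-0.5 * maha - m).sum())

# A point near a class mean scores higher than a far-away (OOD) point:
mus = np.array([[0.0, 0.0], [3.0, 0.0]])
sigma_inv = np.eye(2)
s_in = gem_score(np.array([0.0, 0.0]), mus, sigma_inv)
s_out = gem_score(np.array([10.0, 10.0]), mus, sigma_inv)
```

Thresholding this score then yields the OOD detector: inputs with a score below a threshold chosen on in-distribution validation data are flagged as OOD.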
“…The performance of the near OOD detection experiment when we take CIFAR-10's testing set and all unlabeled data during training as the test set respectively.
(metric garbled) ± 2 33.   24.90 ± 0.86   39.44 ± 3.52   24.00 ± 5.81
Detection Error ↓   16.00 ± 0.23   11.69 ± 0.52   15.75 ± 0.27   11.67 ± 1.62
AUPR-In ↑           93.13 ± 0.44   95.34 ± 0.44   92.86 ± 0.32   95.65 ± 1.04
AUPR-Out ↑          84.72 ± 0.50   89.69 ± 1.25   86.01 ± 1.71   92.34 ± 1.97
training sets respectively as D_out^U.…”
confidence: 99%