2020
DOI: 10.1007/978-3-030-58580-8_7
Hybrid Models for Open Set Recognition

Cited by 99 publications (66 citation statements)
References 14 publications

“…Jiang et al. (2018) propose to use scores based on the relative distance of the predicted class to the second class. Recently, residual flow-based methods were used to obtain a density model for OOD detection (Zhang et al., 2020). Ji et al. (2021) proposed a method based on subfunction error bounds to compute unreliability per sample.…”
Section: Prior Work
confidence: 99%
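As a concrete illustration of the relative-distance idea in this excerpt, below is a minimal Python sketch. It is not Jiang et al.'s exact Trust Score (which uses high-density sets rather than raw centroids); it simply scores a query by the ratio of its distance to the second-closest class centroid over its distance to the closest one. All names are illustrative.

```python
# A simplified relative-distance confidence score: the ratio between the
# distance to the second-closest class centroid and the closest one.
# Larger ratios suggest the prediction is far from competing classes.
import numpy as np

def centroid_margin_score(features, labels, query):
    """Score one query point. `features`: (n, d) training embeddings,
    `labels`: (n,) integer class labels, `query`: (d,) embedding."""
    centroids = np.stack([features[labels == c].mean(axis=0)
                          for c in np.unique(labels)])
    dists = np.linalg.norm(centroids - query, axis=1)
    d1, d2 = np.sort(dists)[:2]      # closest and second-closest class
    return d2 / (d1 + 1e-12)         # >= 1; higher = more reliable

# Toy usage: two well-separated Gaussian classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(centroid_margin_score(X, y, np.array([0.1, 0.0])))   # high score
print(centroid_margin_score(X, y, np.array([2.5, 2.5])))   # near boundary, low
```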
“…Density-based anomaly detection is applied in practice [25, 28, 34, 41, 42, 43] as follows: first, learn a density estimator to approximate the data density, and then plug that estimate into the density-based methods from Section 2.2 and Section 2.3 to discriminate between inliers and outliers. Recent empirical failures [3, 26, 27] of this procedure applied to density scoring have been attributed to the discrepancy between the learned density and the true data density [28, 33, 34, 35, 48]. Instead, we choose in this paper to question the fundamental assumption that these density-based methods should result in a correct classification between outliers and inliers.…”
Section: The Role of Reparametrization
confidence: 99%
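The two-step recipe quoted above (fit a density estimator, then threshold its output) can be sketched as follows. A kernel density estimator stands in for the deep estimators used in the cited works, and the percentile threshold is an assumption made for illustration.

```python
# A minimal sketch of density-based anomaly detection: fit a density
# estimator on inlier data, then flag points whose estimated log-density
# falls below a threshold chosen from the training scores.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 1.0, size=(500, 2))   # training data ~ p(x)

kde = KernelDensity(bandwidth=0.5).fit(inliers)

# Illustrative threshold: the 1st percentile of training log-densities.
threshold = np.percentile(kde.score_samples(inliers), 1)

test = np.array([[0.2, -0.1],    # typical inlier
                 [6.0, 6.0]])    # far from the training mass
is_outlier = kde.score_samples(test) < threshold
print(is_outlier)                # expected: [False  True]
```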
“…For instance, [26, 27, 28] noticed that generative models trained on a benchmark dataset (e.g., CIFAR-10 [29]) and tested on another (e.g., SVHN [30]) are not able to identify the latter as an outlier with current methods. Different hypotheses have been formulated to explain that discrepancy, ranging from the curse of dimensionality [31] to a significant mismatch between the model density and the true data density [26, 32, 33, 34, 35, 36].…”
Section: Introduction
confidence: 99%
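The failure described here comes from a simple evaluation protocol: score both test sets with the trained model's log-likelihood and check whether the OOD set actually scores lower. A hedged sketch, where `model.log_prob` is a hypothetical interface standing in for any likelihood-based generative model (e.g., a normalizing flow):

```python
# AUROC of log-likelihood used as an OOD score: 0.5 is chance, and
# values well below 0.5 reproduce the CIFAR-10-vs-SVHN failure mode
# (OOD data receiving *higher* likelihood than in-distribution data).
import numpy as np
from sklearn.metrics import roc_auc_score

def likelihood_ood_auroc(model, in_dist_x, ood_x):
    scores = np.concatenate([model.log_prob(in_dist_x),
                             model.log_prob(ood_x)])
    labels = np.concatenate([np.ones(len(in_dist_x)),   # 1 = inlier
                             np.zeros(len(ood_x))])     # 0 = outlier
    return roc_auc_score(labels, scores)
```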
“…Improving OOD detection has seen progress through training generative models [6, 7, 2, 8] and through modified objective and loss functions [9]. Exposure to a number of OOD samples during training has also led to improvements [10].…”
Section: Introduction
confidence: 99%
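The exposure-to-OOD-samples idea mentioned above can be sketched as an auxiliary loss term that pushes the model's predictions on auxiliary OOD batches toward the uniform distribution. This follows the common outlier-exposure recipe rather than any cited paper's exact objective; the weight `lam` is an illustrative assumption.

```python
# A minimal outlier-exposure-style training loss: standard cross-entropy
# on in-distribution batches, plus cross-entropy to the uniform
# distribution on auxiliary OOD batches.
import torch
import torch.nn.functional as F

def oe_loss(logits_in, targets_in, logits_out, lam=0.5):
    """Cross-entropy on inliers + uniformity penalty on OOD samples."""
    ce = F.cross_entropy(logits_in, targets_in)
    # Cross-entropy to uniform over K classes = -(1/K) * sum_c log p_c,
    # i.e., the negated per-sample mean of the log-softmax.
    uniform_ce = -F.log_softmax(logits_out, dim=1).mean(dim=1).mean()
    return ce + lam * uniform_ce
```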