2021
DOI: 10.1016/j.neucom.2021.02.007
Outlier exposure with confidence control for out-of-distribution detection

Cited by 56 publications (28 citation statements). References 6 publications.
“…We build upon the existing leading methods: Baseline (Hendrycks and Gimpel, 2017), ODIN (Liang et al., 2018), Gram (Sastry and Oore, 2020), OECC (Papadopoulos et al., 2021), and Energy (Liu et al., 2020). We use the following pretrained models: ResNet-34 (He et al., 2016), DenseNet-BC- Table 3 shows the results of the Energy method and of our method combined with the pretrained Energy model (Energy+pNML), using the WideResNet-40 model on CIFAR-100 and CIFAR-10 as IND sets.…”
Section: Results
confidence: 99%
“…The Energy method (Liu et al., 2020) adds a cost function to the training phase to shape the energy surface explicitly for OOD detection. Papadopoulos et al. (2021) suggested the outlier exposure with confidence control (OECC) method, in which a regularization term is added to the loss function such that the model produces a uniform distribution for OOD samples.…”
Section: DNN Adaptation
confidence: 99%
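The regularization idea quoted above can be sketched as a standard outlier-exposure-style loss: cross-entropy on in-distribution samples plus a term that pulls the model's predictions on exposed OOD samples toward the uniform distribution. This is a minimal illustration, not the paper's exact formulation; the function names, the `1e-12` smoothing, and the weight `lam` are all assumptions for the sketch.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def oecc_style_loss(logits_ind, labels_ind, logits_ood, lam=0.5):
    # Cross-entropy on in-distribution (IND) samples.
    p_ind = softmax(logits_ind)
    ce = -np.log(p_ind[np.arange(len(labels_ind)), labels_ind] + 1e-12).mean()
    # Regularizer: cross-entropy between a uniform target and the OOD
    # predictions; it is minimized exactly when the model outputs a
    # uniform distribution over the classes for OOD inputs.
    log_p_ood = np.log(softmax(logits_ood) + 1e-12)
    uniform_reg = -log_p_ood.mean()
    return ce + lam * uniform_reg
```

In this sketch, confidently peaked predictions on OOD inputs are penalized more than uniform ones, which is the behavior the quoted description attributes to the OECC regularizer.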
“…By training one class versus all the other classes, the DNN learns, in some sense, the out-of-distribution classes, but with the significant advantage of not relying on explicitly provided OOD data, in contrast to other strategies [46], [53]. Thanks to this strategy, the DNN learns to better distinguish objects from known classes from unknown objects belonging to classes not seen during training.…”
Section: From One Versus All (OVA) to OVNNI
confidence: 99%
“…We train it on AVA logits. Note that we have not compared our OVNNI to techniques trained to learn OOD data, such as [46], [53], since in those cases the OOD data are part of the training set, so the technique can only detect the OOD data it was trained on. To balance OVA training, where the "All" class typically has many more samples, we use weighted cross-entropy for each class, with the weight for a given class based on 1 − τ_class, where τ_class is the proportion of training samples belonging to that class.…”
Section: A. Experimental Protocol
confidence: 99%
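The class-balancing rule quoted above (weights based on 1 − τ_class) can be sketched as follows; the function name and the use of raw label counts are illustrative assumptions, not the cited paper's code.

```python
import numpy as np

def ova_class_weights(labels, num_classes):
    # tau_c: the proportion of training samples belonging to class c.
    counts = np.bincount(labels, minlength=num_classes)
    tau = counts / counts.sum()
    # Weight each class by 1 - tau_c, so the over-represented side of
    # each one-versus-all split (the "All" class) contributes less per
    # sample to the weighted cross-entropy.
    return 1.0 - tau
```

For example, with labels `[0, 0, 0, 1]` the proportions are τ = (0.75, 0.25), giving weights (0.25, 0.75): the minority class is up-weighted.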