2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021
DOI: 10.1109/iccv48922.2021.00119
CODEs: Chamfer Out-of-Distribution Examples against Overconfidence Issue

Cited by 16 publications (5 citation statements); references 21 publications.
“…As shown in the simulation, the designed self-triggered algorithm is reasonable in terms of avoiding Zeno behavior. However, only quasi-synchronization and quasi-anti-synchronization are considered; from the projective-factor point of view, a time-varying factor is more universal in applications, especially in social networks (Cheng et al, 2022), object/scene reconstruction (Tang et al, 2021), and 3D object recognition. Therefore, in future study, the obtained scheme will be extended to projective quasi-synchronization with time-varying projective factors, and more practical networks will be considered.…”
Section: Discussion
confidence: 99%
“…The most commonly used training criterion for deep neural networks is the softmax cross-entropy loss. However, previous works (Tang et al 2021; Liu et al 2020) show that directly training with this loss results in overconfidence issues, where the maximum softmax activation value always approaches one regardless of whether the data comes from the training distribution or not. Previous works have shown that other criteria, such as the Helmholtz free energy (Liu et al 2020) or the maximum logit value (Hendrycks et al 2019), are better confidence scores than the maximum softmax value.…”
Section: Related Work
confidence: 97%
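The contrast drawn above between the maximum softmax probability and the energy score (the LogSumExp of the logits, as in Liu et al 2020) can be illustrated numerically. The sketch below is my own minimal NumPy version, not code from any of the cited papers; the function names are illustrative. Because softmax is invariant to a constant shift of the logits, the maximum softmax value saturates near one for both inputs, while the energy score still separates them:

```python
import numpy as np

def max_softmax_score(logits):
    """Maximum softmax probability (MSP) confidence score."""
    z = logits - logits.max()              # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return float(p.max())

def energy_score(logits, T=1.0):
    """Negative Helmholtz free energy (T * LogSumExp of logits);
    higher values indicate more in-distribution data."""
    return float(T * np.log(np.sum(np.exp(logits / T))))

# Two logit vectors that differ only by an overall shift/scale of evidence.
a = np.array([10.0, 0.0, 0.0])
b = np.array([20.0, 10.0, 10.0])   # = a + 10: softmax cannot tell them apart

print(max_softmax_score(a))  # ~0.9999
print(max_softmax_score(b))  # identical ~0.9999 (shift-invariant)
print(energy_score(a))       # ~10.0
print(energy_score(b))       # ~20.0: energy still reflects total evidence
```

This is the overconfidence issue in miniature: MSP compresses very different logit magnitudes into the same near-one confidence, whereas the energy score preserves the difference.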
“…Rehearsal-based methods tackle catastrophic forgetting either by keeping a small set of old training examples in memory (Tao et al 2020a; Dong et al 2021; Liu et al 2022; Yang et al 2022a,b) or by using synthesized data produced by generative models (Shin et al 2017). By using the rehearsal buffer for knowledge distillation and regularization, rehearsal-based methods have achieved state-of-the-art results on various benchmarks (Douillard et al 2022; Joseph et al 2022; Zhang et al 2022).…”
Section: Related Work
confidence: 99%
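The "small set of old training examples in memory" described above is commonly maintained with reservoir sampling, which keeps a bounded buffer where every example seen so far has equal probability of being retained. The sketch below is one generic implementation, not the selection strategy of any specific cited method (those vary, e.g. herding or gradient-based selection); the class and method names are my own:

```python
import random

class RehearsalBuffer:
    """Fixed-capacity memory of past examples maintained by reservoir
    sampling: after n insertions, each example survives with prob capacity/n."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0   # total number of examples offered to the buffer

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Replace a random slot with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        """Draw a replay mini-batch to mix with the current task's batch."""
        return random.sample(self.data, min(k, len(self.data)))

buf = RehearsalBuffer(capacity=100)
for x in range(10_000):          # stream of past-task examples
    buf.add(x)
replay = buf.sample(32)          # replayed alongside new-task data each step
```

During continual training, the replayed batch is typically used both as extra supervised data and as targets for the knowledge distillation and regularization terms mentioned above.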
“…Self-awareness: Self-aware systems have some ability to measure their limitations or predict failures. This includes out-of-distribution detection [78], [79], [80], [81] and open-set recognition [82], [83], [84], [85], where classifiers are trained to reject nonsensical images, adversarial attacks, or images from classes on which they were not trained. All of these problems require the classifier to produce a confidence score for image rejection.…”
Section: Discriminant Explanation Threshold
confidence: 99%
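The rejection mechanism described above amounts to thresholding a per-input confidence score: the classifier predicts a class only when its confidence clears the threshold, and otherwise abstains. A minimal sketch, assuming maximum softmax probability as the score (any of the scores discussed earlier could be substituted); the function name and threshold value are illustrative:

```python
import numpy as np

def predict_with_reject(logits, threshold=0.9):
    """Return the predicted class index, or None to reject the input
    when the maximum softmax probability falls below the threshold."""
    z = logits - logits.max()                  # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    conf = probs.max()
    return int(probs.argmax()) if conf >= threshold else None

print(predict_with_reject(np.array([4.0, 0.5, 0.2])))  # confident: class 0
print(predict_with_reject(np.array([1.0, 0.9, 0.8])))  # ambiguous: None
```

In practice the threshold is tuned on held-out data to trade off rejection rate against accuracy on the accepted inputs.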