2020
DOI: 10.1609/aaai.v34i04.6015

Uncertainty-Aware Deep Classifiers Using Generative Models

Abstract: Deep neural networks are often ignorant about what they do not know and overconfident when they make uninformed predictions. Some recent approaches quantify classification uncertainty directly by training the model to output high uncertainty for the data samples close to class boundaries or from the outside of the training distribution. These approaches use an auxiliary data set during training to represent out-of-distribution samples. However, selection or creation of such an auxiliary data set is non-trivial…

Cited by 42 publications (39 citation statements). References 5 publications.
“…A common approach is to use Bayesian methods, whereby epistemic uncertainty is captured as uncertainty in the model parameters 33 or as uncertainty in function space using, for example, Gaussian processes. 46 Another promising approach is that of evidential learning, 47,48 whereby inputs are mapped to the parameters of a Dirichlet distribution over classes. Smaller parameter values represent less evidence for a class, producing a broader distribution representing greater epistemic uncertainty.…”
Section: Ll (mentioning; confidence: 99%)
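As a rough illustration of the evidential-learning idea described in the statement above, here is a minimal sketch in which a network's non-negative outputs are read as per-class evidence, converted to Dirichlet concentration parameters, and summarized by an uncertainty score that grows when total evidence is scarce. The model class, layer sizes, and variable names are hypothetical, not the cited papers' exact formulation.

```python
import torch
import torch.nn as nn

class EvidentialClassifier(nn.Module):
    """Toy classifier whose outputs parameterize a Dirichlet over classes."""

    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        evidence = torch.relu(self.net(x))          # non-negative evidence per class
        alpha = evidence + 1.0                      # Dirichlet concentration parameters
        strength = alpha.sum(dim=-1, keepdim=True)  # total evidence S = sum_k alpha_k
        probs = alpha / strength                    # expected class probabilities
        # One common epistemic-uncertainty measure: u = K / S,
        # large when the Dirichlet is broad (little evidence for any class).
        uncertainty = alpha.shape[-1] / strength.squeeze(-1)
        return probs, uncertainty

model = EvidentialClassifier(in_dim=10, num_classes=3)
probs, u = model(torch.randn(4, 10))
print(probs.shape, u.shape)  # torch.Size([4, 3]) torch.Size([4])
```

Small concentration parameters give a flat, broad Dirichlet and hence a high u, matching the statement that less evidence means greater epistemic uncertainty.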
“…To ensure the model's generalization to the whole data space, the choice of an effective OOD regularization set is crucial. Although generative models have achieved success in the CV domain [19,34], they do not apply to discrete text data. We adopt two methods that have achieved success in the NLP domain to get effective OOD regularization: (i) using auxiliary OOD datasets; (ii) generating off-manifold adversarial examples.…”
Section: Approach 3.1 Calibrating Evidential Neural Network (mentioning; confidence: 99%)
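One plausible way to implement the OOD regularization mentioned in option (i) is to add, for samples from an auxiliary OOD dataset, a penalty that pushes the predicted Dirichlet toward the flat (uniform) Dirichlet, so the model reports low evidence there. The sketch below assumes the evidential parameterization from the previous example; the loss form and the weighting lam are illustrative, not the cited papers' exact objectives.

```python
import torch
import torch.nn.functional as F

def flat_dirichlet_penalty(alpha_ood: torch.Tensor) -> torch.Tensor:
    """KL( Dir(alpha) || Dir(1,...,1) ), averaged over the OOD batch."""
    ones = torch.ones_like(alpha_ood)
    s_alpha = alpha_ood.sum(-1, keepdim=True)
    kl = (torch.lgamma(s_alpha).squeeze(-1)
          - torch.lgamma(alpha_ood).sum(-1)
          - torch.lgamma(ones.sum(-1))              # minus log Gamma(K) from the flat prior
          + ((alpha_ood - ones)
             * (torch.digamma(alpha_ood) - torch.digamma(s_alpha))).sum(-1))
    return kl.mean()

def total_loss(alpha_in, targets, alpha_ood, lam=0.1):
    # In-distribution term: cross-entropy on the Dirichlet mean;
    # OOD term: pull the Dirichlet on auxiliary samples toward uniform.
    probs_in = alpha_in / alpha_in.sum(-1, keepdim=True)
    ce = F.nll_loss(torch.log(probs_in + 1e-8), targets)
    return ce + lam * flat_dirichlet_penalty(alpha_ood)
```

For text, the auxiliary OOD batch could come from an unrelated corpus or from generated off-manifold examples, per options (i) and (ii) in the statement above.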
“…A critical finding in [17] is that the diversity of the auxiliary dataset is important. Hu et al [19] report that the methods using diverse examples beat the methods that only use close adversarial examples [14,34] in OOD detection in image classification. Our empirical observations also find that randomly generated sentences (we randomly sample words and concatenate them into fake sentences) do not improve the performance.…”
Section: Utilizing Auxiliary Datasets (mentioning; confidence: 99%)