2021
DOI: 10.48550/arxiv.2110.03260
Preprint

An Uncertainty-aware Loss Function for Training Neural Networks with Calibrated Predictions

Abstract: Uncertainty quantification of machine learning and deep learning methods plays an important role in enhancing trust in the obtained results. In recent years, numerous uncertainty quantification methods have been introduced. Monte Carlo dropout (MC-Dropout) is one of the most well-known techniques for quantifying uncertainty in deep learning methods. In this study, we propose two new loss functions by combining cross entropy with Expected Calibration Error (ECE) and Predictive Entropy (PE). The obtained r…
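The abstract describes combining cross entropy with calibration-oriented terms such as Predictive Entropy. As a rough illustration of that idea, the PyTorch-style sketch below adds a weighted predictive-entropy term to standard cross entropy; the function name, the weighting factor `lambda_pe`, and the exact way the PE term enters the objective are assumptions for illustration, not the paper's published formulation.

```python
import torch
import torch.nn.functional as F

def ce_plus_pe_loss(logits, targets, lambda_pe=0.1):
    """Illustrative loss: cross entropy plus a predictive-entropy (PE) term.

    The weight lambda_pe and the sign of the PE term are assumptions for
    illustration, not the authors' exact formulation.
    """
    ce = F.cross_entropy(logits, targets)                  # standard classification loss
    probs = F.softmax(logits, dim=-1)
    pe = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)   # predictive entropy per sample
    return ce + lambda_pe * pe.mean()
```

The same pattern would apply to the ECE-based variant, with the entropy term replaced by a (differentiable surrogate of a) calibration penalty.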

Cited by 2 publications (3 citation statements)
References: 29 publications
“…Recently, the authors in [67] proposed an interesting idea for optimizing a simple MCD model, which yielded a higher Uacc without sacrificing model accuracy. To this end, a specific loss function is used that takes into account both Uacc and the accuracy of the model:…”
Section: Results (mentioning)
confidence: 99%
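The quoted statement refers to Uacc, the uncertainty accuracy of the model; the formula elided after the colon is not reproduced here. Instead, the sketch below computes Uacc under a common definition (a prediction counts as "good" when it is correct and certain, or incorrect and uncertain, given an uncertainty threshold). The function name and default threshold are illustrative assumptions.

```python
import numpy as np

def uncertainty_accuracy(y_true, y_pred, uncertainty, threshold=0.5):
    """Uncertainty accuracy (Uacc): fraction of predictions that are either
    correct and certain, or incorrect and uncertain, for a given uncertainty
    threshold.  The threshold value here is illustrative."""
    correct = (y_true == y_pred)
    certain = (uncertainty <= threshold)
    good = (correct & certain) | (~correct & ~certain)
    return good.mean()
```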
“…In an ideal scenario, high uncertainty is assigned to model predictions whenever the model is not confident about them. In [67], an uncertainty-aware loss function was introduced to improve MCD performance. Although the results were acceptable, the hyperparameters were tuned by hand.…”
Section: Dataset (mentioning)
confidence: 99%
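For context on the MCD performance discussed above, the sketch below shows a common way MC-Dropout is run at inference: dropout stays active, several stochastic forward passes are averaged, and the entropy of the mean softmax is used as the uncertainty estimate. The helper name and sample count are assumptions, not an API from the cited papers.

```python
import torch
import torch.nn.functional as F

def mc_dropout_predict(model, x, n_samples=20):
    """Monte Carlo dropout inference: keep dropout stochastic at test time,
    average the softmax outputs over n_samples passes, and take the entropy
    of the mean distribution as per-sample predictive uncertainty."""
    model.train()  # keeps dropout active; in practice only dropout layers
                   # should be switched to train mode (batch norm left in eval)
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=-1)
    return mean_probs, entropy
```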
“…In [26], targeted dropout was suggested to encourage robustness to subsequent pruning when training a large, sparse network using a simple self-reinforcing sparsity criterion. In [27], the authors combined Expected Calibration Error (ECE) and Predictive Entropy (PE) with cross entropy to form two new loss functions. The proposed loss functions improve the uncertainty estimates of the MC dropout model.…”
Section: Introduction (mentioning)
confidence: 99%
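Since [27] is described as combining Expected Calibration Error with cross entropy, the standard binned ECE computation is sketched below for reference; the bin count and function name are illustrative choices, not taken from the cited work.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """Standard binned ECE: weighted average over confidence bins of
    |accuracy(bin) - mean confidence(bin)|."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = (predictions[in_bin] == labels[in_bin]).mean()
            conf = confidences[in_bin].mean()
            ece += in_bin.mean() * abs(acc - conf)
    return ece
```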