2021
DOI: 10.1016/j.inffus.2021.05.008
A review of uncertainty quantification in deep learning: Techniques, applications and challenges

Cited by 1,071 publications (453 citation statements)
References 366 publications
“…There are many tools for estimating uncertainty, such as bootstrapping, quantile regression, Bayesian inference, and dropout for neural networks [35-37]. Depending on the model, one technique may be more suitable than others; they will therefore be described in more detail in the subsequent sections, which introduce the tested models.…”
Section: Data-driven Models and Uncertainty Estimation
Citation type: mentioning (confidence: 99%)
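The excerpt above names several UQ techniques; the sketch below illustrates the first of them, bootstrapping, and is my illustration rather than code from the cited review. `fit_model` is a hypothetical callable that fits and returns an estimator exposing a `predict` method; the spread of predictions across models refit on resampled data serves as the uncertainty estimate.

```python
import numpy as np

def bootstrap_uncertainty(X, y, fit_model, X_test, n_boot=100, seed=0):
    """Mean prediction and spread across models fit on bootstrap resamples."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), size=len(X))  # sample rows with replacement
        model = fit_model(X[idx], y[idx])           # refit on the resampled data
        preds.append(model.predict(X_test))
    preds = np.stack(preds)                         # shape: (n_boot, n_test)
    return preds.mean(axis=0), preds.std(axis=0)    # point estimate, uncertainty
```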
“…One way to mitigate and manage the risk of making dangerous decisions is to estimate the prediction uncertainty. There are a number of methods for estimating it with an NN, such as Bayesian methods and bagging [36], but the most popular is certainly dropout [38]. This technique was originally developed to prevent co-adaptation of parameters during training, in order to reduce overfitting and improve generalization.…”
Section: Feed-forward Neural Network
Citation type: mentioning (confidence: 99%)
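As a minimal sketch of the dropout-based uncertainty estimation this excerpt describes (Monte Carlo dropout; the architecture and names below are illustrative, not from the cited work), dropout is kept active at inference time and the spread over stochastic forward passes is read as predictive uncertainty:

```python
import torch
import torch.nn as nn

# A small network with dropout between layers (architecture is illustrative).
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=50):
    model.train()  # keeps nn.Dropout stochastic even at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # prediction, uncertainty

mean, std = mc_dropout_predict(model, torch.randn(5, 10))
```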
“…However, few datasets exist with train and test subsets for applications of machine learning and deep learning methods with UQ in regression [4,5]. UQ is gaining popularity because of the demand for it in ML and DL methods [6-9]. In most previous studies, researchers in UQ have split the dataset randomly [10,11].…”
Section: Background and Summary
Citation type: mentioning (confidence: 99%)
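The random-split practice the excerpt mentions becomes reproducible with a fixed seed, which matters when comparing UQ methods across studies. A minimal sketch, assuming scikit-learn and placeholder data (all identifiers are mine):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 8)  # placeholder features
y = np.random.rand(1000)     # placeholder targets

# A fixed random_state pins the split, so UQ metrics reported on the
# held-out subset can be reproduced and compared across studies.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```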
“…We train networks for point prediction. Then, we compute the upper and lower bounds under a Gaussian, homoscedastic uncertainty assumption [9,28]. Under this assumption, the PI is given as [μ − zσ, μ + zσ].…”
Section: Technical Validation: Model Training for Initial Performance
Citation type: mentioning (confidence: 99%)
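A short sketch of the interval computation described above, PI = [μ − zσ, μ + zσ], assuming a single (homoscedastic) σ estimated from held-out residuals and z taken from the Gaussian quantile; the function and variable names are mine, not the source's:

```python
import numpy as np
from scipy.stats import norm

def gaussian_pi(mu, residuals, confidence=0.95):
    """PI = [mu - z*sigma, mu + z*sigma] under a homoscedastic Gaussian."""
    sigma = residuals.std()             # one sigma shared by all test points
    z = norm.ppf(0.5 + confidence / 2)  # e.g. z ~ 1.96 at 95% confidence
    return mu - z * sigma, mu + z * sigma
```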
“…In recent years, machine learning algorithms have become a more robust approach in landslide research [43], but the models require managing uncertainties. These uncertainties can arise from errors and model variability [44], difficulties in parameter selection [45], incomplete system understanding [46], the weighting of parameters [47], and human judgment [48]. Moreover, machine learning may produce prediction errors if trained on a small dataset [43].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)