2020
DOI: 10.3389/fevo.2020.00035
How Should We Quantify Uncertainty in Statistical Inference?

Abstract: An inferential statement is any statement about the parameters, the form of the underlying process, or future outcomes. An inferential statement that provides an approximation to the truth becomes "statistical" only when there is a measure of uncertainty associated with it. The uncertainty of an inferential statement is generally quantified in terms of the probability of the strength of the approximation to the truth. This is what we term "inferential uncertainty." The answer to this question has significant implications in …

Cited by 17 publications (10 citation statements)
References 54 publications
“…Many factors influence the final trained model, which include the initial randomly assigned weights, the process of training (e.g., the size and order of the mini-batches), and the regularization method (e.g., early stopping, lasso). This phenomenon is not common in traditional statistical analysis, but has received some attention [ 27 , 28 ]. We plan to investigate these uncertainty issues in the context of DNN models in the future.…”
Section: Discussion
confidence: 99%
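The variability described in the statement above, where the same data can yield different fitted models depending on the random initialization, the mini-batch order, and the regularization choices, is easy to reproduce on a small scale. Below is a minimal sketch (not taken from the cited paper, and using a plain logistic regression trained by SGD rather than a DNN); the data, model, and hyperparameters are all illustrative assumptions. It fits the same model twice with different random seeds and compares the resulting weights.

```python
# Minimal sketch: train the same logistic-regression model twice with different
# random seeds, so that both the initial weights and the mini-batch order differ,
# and compare the fitted parameters.
import numpy as np

def train_logreg_sgd(X, y, seed, epochs=50, batch_size=16, lr=0.1):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.1, size=d)       # randomly assigned initial weights
    for _ in range(epochs):
        order = rng.permutation(n)          # mini-batch order depends on the seed
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            p = 1.0 / (1.0 + np.exp(-X[idx] @ w))
            w -= lr * X[idx].T @ (p - y[idx]) / len(idx)
    return w

# Synthetic data shared by both training runs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 0.0])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)

w_a = train_logreg_sgd(X, y, seed=1)
w_b = train_logreg_sgd(X, y, seed=2)
print("run A:", np.round(w_a, 3))
print("run B:", np.round(w_b, 3))
print("max |difference|:", np.round(np.max(np.abs(w_a - w_b)), 3))
```

Because the logistic-regression loss is convex, the two runs end near, but not exactly at, the same weights; with a non-convex model such as a DNN, different seeds can land in genuinely different solutions, which is the source of the uncertainty the authors plan to investigate.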
“…As an example we mention the molecular-level data collected via […, 30]; their use opens up new challenges in understanding the huge amount of generated data: from new approaches to deal with high data volumes generated at higher and higher speeds and presented in a variety of forms (structured, semi-structured, and/or unstructured data) [24], to new approaches to deal with data heterogeneity [22,24], with incomplete data [21,24], or even with irreproducible data, which is a major issue at least in immunology and cell biology [31,32], and even challenges in understanding the biological mechanisms behind the data [33]. While artificial intelligence techniques (e.g., machine learning, natural language processing, computational intelligence) can provide faster and more accurate results in data analytics compared to classical statistical methods [24] (especially if the training data are not biased in any way), they do not provide us with a mechanistic understanding of the data.…”
Section: Data Model Parametrization Uncertainty
confidence: 99%
“…If the data are highly informative either by design or by chance, we should be quite confident about our estimate of the total population size, irrespective of what other experimenters might observe. It can be shown (see review in Lele, 2020b) that prediction of a new observation based on local uncertainty is more accurate than prediction based on global uncertainty. However, this result also depends on correct model specification.…”
Section: Local Uncertainty in Evidence
confidence: 99%
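The contrast between local (conditional) and global (unconditional) uncertainty described above can be illustrated with a stylized version of Cox's classic two-instrument example. The sketch below is not the analysis in Lele (2020b); the fair coin flip between a precise (sd = 1) and an imprecise (sd = 10) measuring instrument, the 95% level, and the normal errors are all assumptions chosen for illustration. A "local" interval conditions on the instrument that actually produced the measurement, while a "global" interval is calibrated by averaging over which instrument might have been used.

```python
import math
import numpy as np

def norm_cdf(x, sd):
    return 0.5 * (1.0 + math.erf(x / (sd * math.sqrt(2.0))))

def mixture_coverage(c, sd_precise=1.0, sd_coarse=10.0):
    # P(|error| <= c) when the instrument is chosen by a fair coin flip
    return 0.5 * (2 * norm_cdf(c, sd_precise) - 1) + 0.5 * (2 * norm_cdf(c, sd_coarse) - 1)

# Half-width of the 95% "global" interval: solve mixture_coverage(c) = 0.95 by bisection.
lo, hi = 0.0, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mixture_coverage(mid) < 0.95:
        lo = mid
    else:
        hi = mid
c_global = 0.5 * (lo + hi)

rng = np.random.default_rng(0)
mu = 5.0                                    # true quantity being measured
n_rep = 100_000
precise = rng.random(n_rep) < 0.5           # which instrument was actually used
sd = np.where(precise, 1.0, 10.0)
x = rng.normal(mu, sd)                      # one measurement per replicate

# Local interval: half-width depends on the instrument actually used.
c_local = 1.96 * sd
cover_local = np.abs(x - mu) <= c_local
cover_global = np.abs(x - mu) <= c_global

print(f"global half-width: {c_global:.2f}")
print("coverage given precise instrument: local "
      f"{cover_local[precise].mean():.3f}, global {cover_global[precise].mean():.3f}")
print("coverage given coarse instrument:  local "
      f"{cover_local[~precise].mean():.3f}, global {cover_global[~precise].mean():.3f}")
print(f"overall coverage: local {cover_local.mean():.3f}, global {cover_global.mean():.3f}")
```

Conditional on the precise instrument the global interval grossly overcovers (it is far wider than the observed data warrant), and conditional on the imprecise instrument it undercovers; the local interval is well calibrated in both cases. This echoes the point that, when the data happen to be highly informative, we should be confident in our estimate irrespective of what other experimenters might have observed.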