1995
DOI: 10.1111/j.2517-6161.1995.tb02015.x
Assessment and Propagation of Model Uncertainty

Abstract: In most examples of inference and prediction, the expression of uncertainty about unknown quantities y on the basis of known quantities x is based on a model M that formalizes assumptions about how x and y are related. M will typically have two parts: structural assumptions S, such as the form of the link function and the choice of error distribution in a generalized linear model, and parameters θ whose meaning is specific to a given choice of S. It is common in statistical theory and practice to ackno…

Cited by 1,144 publications (859 citation statements)
References 80 publications
“…It accounts for uncertainty about the model by taking an average of the posterior distribution of a quantity of interest, weighted by the posterior probabilities of several potential models. Following the previously discussed work of Madigan and Raftery (1994), the idea of model averaging was developed further by Draper (1995) and Raftery, Madigan, and Hoeting (1997). In their review article, Hoeting et al (1999) discussed model averaging in the context of GLMs.…”
Section: Multivariate Response Extensions and Other GLMs
confidence: 98%
“…The de"ciency becomes apparent when data from new studies are examined and the original model is found to be seriously biased. Several authors (for example, Draper [24]) have recognized and suggested ways of trying to overcome a related problem, that of model uncertainty; statistical inference is routinely applied to the parameters as though the model were the true and only one, without taking into account many others that were tried and rejected during data exploration. Since data analysis and the selection of a model are complex, ill-de"ned and often highly subjective processes, it is unlikely that any satisfactory procedure will ever be found to accommodate model uncertainty adequately.…”
Section: Discussionmentioning
confidence: 99%
“…This approach, called Bayesian model averaging-or more specifically maximum likelihood Bayesian model averaging (MLBMA), in the case of Neuman (2003)-uses a more statistically consistent methodology to assess the Bayesian posterior probabilities for a given conceptual model. While approaches such as Draper (1995), Kass and Raftery (1995), and Hoeting et al (1999) rely on extensive Monte Carlo simulations to calculate the probabilities, Neuman (2003) proposed using likelihood measures such as the Kashyap information criterion (KIC; Kashyap 1982) and Bayesian information criterion (BIC; Schwarz 1978) to weight different models, thus obviating the need for extensive simulations.…”
Section: Background On Uncertainty Analysis
confidence: 99%
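The information-criterion weighting described in the statement above can be sketched briefly. This is a minimal illustration only, not code from any of the cited papers: the BIC values and per-model predictions are invented, and equal prior model probabilities are assumed, so that each posterior model weight is proportional to exp(−BIC/2).

```python
import math

def bic_weights(bics):
    """Posterior model probabilities from BIC values, assuming equal
    prior model probabilities: w_k ∝ exp(-BIC_k / 2)."""
    m = min(bics)  # subtract the minimum BIC for numerical stability
    raw = [math.exp(-(b - m) / 2.0) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

def model_average(predictions, weights):
    """BMA point prediction: weighted average of per-model predictions."""
    return sum(p * w for p, w in zip(predictions, weights))

# Hypothetical values for three candidate models:
bics = [100.0, 102.0, 110.0]
preds = [1.2, 1.5, 0.9]
w = bic_weights(bics)
print(w, model_average(preds, w))
```

The weights decay quickly with BIC (a difference of 2 already roughly halves a model's weight relative to the best model), which is why a few well-fitting models usually dominate the average.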
“…The other major framework for addressing model uncertainty is the formal Bayesian approach, which has been used by Draper (1995), Kass and Raftery (1995), Hoeting et al (1999), Woodbury and Ulrych (2000), Neuman (2003), Neuman and Wierenga (2003), and Ye et al (2004), among others. This approach, called Bayesian model averaging-or more specifically maximum likelihood Bayesian model averaging (MLBMA), in the case of Neuman (2003)-uses a more statistically consistent methodology to assess the Bayesian posterior probabilities for a given conceptual model.…”
Section: Background On Uncertainty Analysis
confidence: 99%