2004
DOI: 10.1214/009053604000000238
Optimal predictive model selection

Abstract: Often the goal of model selection is to choose a model for future prediction, and it is natural to measure the accuracy of a future prediction by squared error loss. Under the Bayesian approach, it is commonly perceived that the optimal predictive model is the model with highest posterior probability, but this is not necessarily the case. In this paper we show that, for selection among normal linear models, the optimal predictive model is often the median probability model, which is defined as the model consis…
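The abstract's central construction can be illustrated with a small sketch. The median probability model keeps every covariate whose marginal posterior inclusion probability is at least one half; the helper name and the probabilities below are hypothetical, for illustration only, not from the paper:

```python
# Illustrative sketch of the median probability model of Barbieri and
# Berger (2004): retain every covariate whose marginal posterior
# inclusion probability is >= 1/2. The probabilities are made up and
# the function name is ours, not the paper's.

def median_probability_model(inclusion_probs, threshold=0.5):
    """Indices of variables with inclusion probability >= threshold."""
    return [j for j, p in enumerate(inclusion_probs) if p >= threshold]

# Marginal posterior inclusion probabilities for five candidate covariates.
probs = [0.92, 0.41, 0.77, 0.08, 0.55]
print(median_probability_model(probs))  # -> [0, 2, 4]
```

Note that this set need not coincide with the highest-posterior-probability model, which is the paper's point.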

Cited by 807 publications (712 citation statements) | References 24 publications
“…For each sample group, we then select the set of edges that appear with marginal posterior probability of inclusion (PPI) > 0.5. Although this rule was proposed by Barbieri and Berger (2004) in the context of prediction rather than structure discovery, we found that it resulted in a reasonable expected false discovery rate (FDR). Following Newton et al. (2004), we let Ο_{k,ij} represent 1 − the marginal posterior probability of inclusion for edge (i, j) in graph k.…”
Section: Posterior Inference (mentioning)
confidence: 82%
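The expected-FDR check described in this excerpt can be sketched as follows. This is one hypothetical reading of a Newton et al. (2004)-style estimate: select edges with PPI above the threshold, then average 1 − PPI over the selected set. The function and PPI values are illustrative assumptions, not taken from the cited work:

```python
# Hypothetical sketch of a Newton et al. (2004)-style expected FDR:
# select edges with PPI > 0.5, then estimate the FDR as the mean of
# (1 - PPI) over the selected set. The PPI values are invented.

def expected_fdr(ppis, threshold=0.5):
    """Mean of (1 - PPI) over edges whose PPI exceeds the threshold."""
    selected = [p for p in ppis if p > threshold]
    if not selected:
        return 0.0
    return sum(1.0 - p for p in selected) / len(selected)

ppis = [0.95, 0.80, 0.30, 0.60]
# Selected edges have PPI 0.95, 0.80, 0.60, so the estimate is
# (0.05 + 0.20 + 0.40) / 3.
print(round(expected_fdr(ppis), 3))  # -> 0.217
```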
“…It is often also of interest to select the important variables affecting the response y. In the context of variable selection in a normal linear model, Barbieri and Berger (2004) advocated using the median probability model, consisting of the variables with posterior marginal inclusion probability greater than or equal to one half. They also proved the predictive optimality of such models.…”
Section: Results (mentioning)
confidence: 99%
“…For prediction purposes, a Bayesian Model Averaging (BMA) approach is more consistent with the fundamental Bayesian idea. In fact, Barbieri and Berger [45] proved that under squared error loss the best predictive model is the median probability model (which is often not the highest probability model). However, in our particular case, we did not use the BMA approach for the following reasons: (1) for targeted therapy development, it is more important to focus on just a few biomarkers that can differentiate treatment effects.…”
Section: Discussion (mentioning)
confidence: 99%
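For contrast with selecting a single model, a BMA point prediction under squared error loss is simply the posterior-probability-weighted average of the per-model predictions. A minimal sketch, with all numbers invented for illustration:

```python
# Minimal sketch of a Bayesian-model-averaged point prediction: weight
# each model's prediction by its posterior model probability. Both the
# predictions and the probabilities are hypothetical.

def bma_prediction(model_predictions, model_probabilities):
    """Posterior-weighted average of per-model point predictions."""
    return sum(p * yhat
               for p, yhat in zip(model_probabilities, model_predictions))

preds = [1.0, 1.5, 2.0]   # per-model predictions for one new observation
probs = [0.2, 0.5, 0.3]   # posterior model probabilities (sum to 1)
print(round(bma_prediction(preds, probs), 6))  # -> 1.55
```

Barbieri and Berger's result is that, when one must commit to a single model rather than average, the median probability model is the one whose prediction is typically closest to this averaged prediction under squared error loss.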