2005
DOI: 10.1214/009053604000001147

Spike and slab variable selection: Frequentist and Bayesian strategies

Abstract: Variable selection in the linear regression model takes many apparent faces from both frequentist and Bayesian standpoints. In this paper we introduce a variable selection method referred to as a rescaled spike and slab model. We study the importance of prior hierarchical specifications and draw connections to frequentist generalized ridge regression estimation. Specifically, we study the usefulness of continuous bimodal priors to model hypervariance parameters, and the effect scaling has on the posterior mean…
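For orientation, here is a sketch of the hierarchy behind the rescaled spike and slab model, as it is usually presented for this paper (notation is assumed, and the rescaling of the response is omitted):

\[
\begin{aligned}
(Y_i \mid \mathbf{x}_i, \boldsymbol{\beta}, \sigma^2) &\sim \mathrm{N}(\mathbf{x}_i^{\top}\boldsymbol{\beta},\, \sigma^2),\\
(\beta_k \mid I_k, \tau_k^2) &\sim \mathrm{N}(0,\, I_k \tau_k^2),\\
(I_k \mid v_0, w) &\sim (1-w)\,\delta_{v_0}(\cdot) + w\,\delta_{1}(\cdot),\\
\tau_k^{-2} &\sim \mathrm{Gamma}(a_1, a_2),
\end{aligned}
\]

with a small v_0 > 0. The hypervariance of \beta_k is \gamma_k = I_k \tau_k^2; the two-point mixture for I_k combined with the continuous Gamma prior on \tau_k^{-2} induces the continuous bimodal prior on \gamma_k that the abstract refers to.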

Cited by 817 publications (714 citation statements)
References: 38 publications

Citation statements, ordered by relevance:
“…Based on the data, regression coefficients close to zero will be assigned to the spike, resulting in shrinkage towards 0, while coefficients that deviate substantially from zero will be assigned to the slab, resulting in (almost) no shrinkage. Early proposals of mixture priors can be found in George and McCulloch (1993) and Mitchell and Beauchamp (1988), and a scale mixture of normals formulation can be found in Ishwaran and Rao (2005). We will consider the following specification of the mixture prior:…”
Section: Discrete Normal Mixture (citation type: mentioning)
Confidence: 99%
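The quoted specification is cut off; for concreteness, a common discrete normal-mixture (spike and slab) form, written here as an illustration rather than the citing paper's exact choice, is

\[
\beta_j \mid \gamma_j \;\sim\; (1-\gamma_j)\,\mathrm{N}(0, \tau_0^2) \;+\; \gamma_j\,\mathrm{N}(0, \tau_1^2), \qquad \gamma_j \sim \mathrm{Bernoulli}(w),
\]

with \tau_0^2 \ll \tau_1^2: the narrow spike shrinks small coefficients toward zero, while the wide slab leaves large coefficients essentially unshrunk.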
“…Due to these advantages, Bayesian penalization is becoming increasingly popular in the literature (see, e.g., Alhamzawi et al., 2012; Andersen et al., 2017; Armagan et al., 2013; Bae and Mallick, 2004; Bhadra et al., 2016; Bhattacharya et al., 2012; Bornn et al., 2010; Caron and Doucet, 2008; Carvalho et al., 2010; Feng et al., 2017; Griffin and Brown, 2017; Hans, 2009; Ishwaran and Rao, 2005; Lu et al., 2016; Peltola et al., 2014; Polson and Scott, 2011; Roy and Chakraborty, 2016; Zhao et al., 2016). An active area of research investigates theoretical properties of priors for Bayesian penalization, such as the Bayesian lasso prior (for a recent overview, see Bhadra et al., 2017).…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
“…BayesB is equivalent to the stochastic search variable selection method, which was originally developed by George and McCulloch (1993) and has been applied to QTL mapping by Yi et al. (2003) and Wang et al. (2005). BayesCπ is still the stochastic search variable selection method with variable π and has been used by Ishwaran and Rao (2005) (who named it the spike and slab variable selection) and Xu (2007). From these points of view, the three 'BayesT' methods (BayesTA, BayesTB and BayesTCπ) proposed herein may also be regarded as threshold-model versions of the Bayesian shrinkage method and the stochastic search variable selection method.…”
Section: Common Data Set of the Fourteenth QTL-MAS Workshop (citation type: mentioning)
Confidence: 99%
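To make the shared stochastic-search step concrete, here is a minimal sketch of the two Gibbs updates involved: the spike/slab indicator draw and a Beta draw for a variable mixing weight π. This assumes a two-component normal mixture with hypothetical names; it is an illustration, not the exact BayesCπ implementation.

import numpy as np
from scipy.stats import norm

def update_indicators(beta, pi, tau0_sq, tau1_sq, rng):
    # P(gamma_j = 1 | beta_j) is proportional to pi * N(beta_j; 0, tau1_sq);
    # P(gamma_j = 0 | beta_j) is proportional to (1 - pi) * N(beta_j; 0, tau0_sq).
    p1 = pi * norm.pdf(beta, scale=np.sqrt(tau1_sq))
    p0 = (1.0 - pi) * norm.pdf(beta, scale=np.sqrt(tau0_sq))
    return rng.random(beta.shape) < p1 / (p0 + p1)

def update_pi(gamma, rng, a=1.0, b=1.0):
    # Beta(a, b) prior on pi; full conditional is Beta(a + #included, b + #excluded).
    k = int(gamma.sum())
    return rng.beta(a + k, b + gamma.size - k)

Treating π as unknown in this way is what distinguishes the "variable π" samplers from those that fix the prior inclusion probability in advance.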
“…A normal posterior simplifies the MCMC sampling process because the Gibbs sampler can be used to draw the regression coefficient. Other prior distributions have been proposed, e.g., the mixture prior of two normal distributions (George and McCulloch 1993; Yi et al. 2003) and the spike and slab model (Ishwaran and Rao 2005). A t-distribution may also be used as a prior for the regression coefficient.…”
Section: Discussion (citation type: mentioning)
Confidence: 99%
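As a concrete instance of the conjugacy argument in this quote, here is a minimal sketch (hypothetical names; single-coefficient update in a linear model) of the normal full-conditional draw:

import numpy as np

def draw_beta_j(x_j, partial_resid, sigma_sq, tau_sq, rng):
    # Normal likelihood + N(0, tau_sq) prior => normal full conditional:
    #   var  = (x_j'x_j / sigma_sq + 1 / tau_sq)^{-1}
    #   mean = var * x_j' r_j / sigma_sq,
    # where r_j is the residual with beta_j's own contribution added back.
    precision = x_j @ x_j / sigma_sq + 1.0 / tau_sq
    var = 1.0 / precision
    mean = var * (x_j @ partial_resid) / sigma_sq
    return rng.normal(mean, np.sqrt(var))

Because every draw is from a closed-form normal distribution, no Metropolis step or proposal tuning is needed, which is the simplification the quote points to.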