2009 · DOI: 10.1198/jcgs.2009.08027
Transdimensional Sampling Algorithms for Bayesian Variable Selection in Classification Problems With Many More Variables Than Observations

Cited by 37 publications (31 citation statements) · References 26 publications
“…To assess the sensitivity of the Bayesian results to the inputs of hyperparameters in the prior distributions, we reanalyzed the data set using different values of c, π_i, h, R_0, ρ_0, and τ. For instance, with c = 5 as suggested by Lamnisos et al (2009), π_i = 0.007, h = 200, R_0 = 4I, ρ_0 = 6, and τ = 0.005, the identification of the relevant genes and the classification performance are essentially the same as before.…”
Section: Leukemia Data
confidence: 64%
“…Sha et al (2004) proposed an algorithm based on a multinomial probit model that uses add/delete and swap moves. According to Lamnisos et al (2009), this kind of algorithm, which randomly chooses either to add or delete a single explanatory variable or to swap two explanatory variables in the model, often leads to high model acceptance rates when the number of variables is substantially larger than the sample size. Moreover, the Metropolis random walk suggested by Sha et al (2004), with local proposals and a high acceptance rate, is often associated with poor mixing of the MCMC chains.…”
Section: Introduction
confidence: 99%
“…The analytical expression for the marginal density f(Z, Y | M) facilitates the application of the adaptive MCMC methods for variable selection proposed by Lamnisos et al (2009, 2013), which are very efficient when the number of independent variables is large. These algorithms are based on the Random Walk Metropolis sampler with three possible moves: "Addition", "Deletion", and "Swapping" of regressors, chosen uniformly at random.…”
Section: Model Selection
confidence: 99%
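The add/delete/swap Random Walk Metropolis scheme described in the statement above can be sketched in a few lines. This is a minimal illustration only: the toy Gaussian data, the BIC-style model score standing in for the true marginal likelihood, and the omission of proposal-ratio corrections in the acceptance step are all simplifying assumptions, not the multinomial-probit implementation of Sha et al (2004) or Lamnisos et al (2009).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "p much larger than n" data: 30 observations, 100 variables, 3 active.
n, p = 30, 100
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[[3, 17, 42]] = 2.0
y = X @ beta + rng.standard_normal(n)

def log_score(gamma):
    """Crude model score: Gaussian profile log-likelihood at the
    least-squares fit plus a BIC-style complexity penalty (a stand-in
    for the analytically available marginal likelihood)."""
    k = int(gamma.sum())
    if k == 0:
        rss = y @ y
    else:
        Xg = X[:, gamma]
        b, *_ = np.linalg.lstsq(Xg, y, rcond=None)
        r = y - Xg @ b
        rss = r @ r
    return -0.5 * n * np.log(rss / n) - 0.5 * k * np.log(n)

def propose(gamma):
    """Uniformly pick one of the three transdimensional moves:
    add one excluded variable, delete one included variable,
    or swap an included variable for an excluded one."""
    g = gamma.copy()
    inc = np.flatnonzero(g)
    exc = np.flatnonzero(~g)
    move = rng.choice(["add", "delete", "swap"])
    if move == "add" and exc.size:
        g[rng.choice(exc)] = True
    elif move == "delete" and inc.size:
        g[rng.choice(inc)] = False
    elif move == "swap" and inc.size and exc.size:
        g[rng.choice(inc)] = False
        g[rng.choice(exc)] = True
    return g

# Metropolis chain over inclusion vectors, started from the empty model.
gamma = np.zeros(p, dtype=bool)
cur = log_score(gamma)
for _ in range(2000):
    cand = propose(gamma)
    new = log_score(cand)
    if np.log(rng.random()) < new - cur:  # accept/reject step
        gamma, cur = cand, new

print("selected variables:", sorted(np.flatnonzero(gamma).tolist()))
```

With informative data the chain typically concentrates on the truly active variables; a full implementation would also account for the asymmetric proposal probabilities of the three moves in the acceptance ratio.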
“…For the intercept α, following Sha et al (2004) and Lamnisos et al (2009), a univariate normal prior is adopted here…”
Section: Prior Specification
confidence: 99%