2018
DOI: 10.1002/asmb.2381
Bayesian l0‐regularized least squares

Abstract: Bayesian l0-regularized least squares is a variable selection technique for high-dimensional predictors. The challenge is optimizing a nonconvex objective function via a search over model space consisting of all possible predictor combinations. Spike-and-slab (aka Bernoulli-Gaussian) priors are the gold standard for Bayesian variable selection, though at a cost in computational speed and scalability. Single best replacement (SBR) provides a fast, scalable alternative. We provide a link between Bayesian regularizat…
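The abstract names single best replacement (SBR) as the fast, scalable alternative to full spike-and-slab search. As a rough illustration of the kind of greedy l0 search involved, here is a minimal Python sketch; the function name sbr_l0_least_squares, the refit-on-active-set objective, and the stopping rule are assumptions made for illustration, not the paper's implementation.

```python
# Minimal sketch of a single-best-replacement (SBR) style greedy search for the
# l0-regularized least-squares objective  ||y - X beta||^2 + lam * ||beta||_0.
# Illustrative only: the refit strategy and stopping rule are assumptions,
# not the authors' code.
import numpy as np


def sbr_l0_least_squares(X, y, lam, max_iter=100):
    n, p = X.shape
    active = np.zeros(p, dtype=bool)  # current support; start from the empty model

    def objective(support):
        """Residual sum of squares of the least-squares refit plus lam * |support|."""
        if not support.any():
            return float(y @ y)
        beta_s, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        resid = y - X[:, support] @ beta_s
        return float(resid @ resid) + lam * support.sum()

    best = objective(active)
    for _ in range(max_iter):
        # Evaluate every "single replacement": flip one coordinate in or out.
        trials = []
        for j in range(p):
            trial = active.copy()
            trial[j] = ~trial[j]
            trials.append((objective(trial), j))
        new_obj, j_star = min(trials)
        if new_obj >= best:  # no single flip improves the objective: stop
            break
        active[j_star] = ~active[j_star]
        best = new_obj

    # Refit on the selected support and return the sparse coefficient vector.
    beta = np.zeros(p)
    if active.any():
        beta[active], *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
    return beta, active
```

For a design matrix X and response y, sbr_l0_least_squares(X, y, lam=2.0) returns the refit coefficients and the selected support; a larger lam prices each included predictor more heavily and yields a sparser model.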

Cited by 9 publications (4 citation statements). References 47 publications (73 reference statements).
“…The design of nonconvex penalties that do not lead to a shrinkage of the basin is another possibility for nonconvex compressed sensing. The relationship between the sparse prior in the Bayesian approach and the sparse penalty in the frequentist approach implies that SCAD and MCP can be related, in Bayesian terminology, to a Bernoulli-Gaussian prior with large variance [25]. A unified understanding of sparsity across the Bayesian and frequentist approaches will be helpful for designing such desirable sparse penalties.…”
Section: Summary and Discussion
Mentioning confidence: 99%
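For concreteness, here is a short reference sketch, in our own notation rather than that of the cited works, of one of the nonconvex penalties named above: the MCP penalty with parameters (λ, γ) flattens to a constant once a coefficient is large, and SCAD is likewise constant beyond γλ. A flat per-coefficient charge is also what a Bernoulli-Gaussian prior with a very diffuse slab induces at the MAP, which is the sense in which the two can be related (see the sketch after the last citation statement below).

```latex
% MCP in our notation (gamma > 1); SCAD is likewise constant for |b| > gamma*lambda.
\[
  \mathrm{pen}_{\mathrm{MCP}}(b;\lambda,\gamma) \;=\;
  \begin{cases}
    \lambda\,|b| \;-\; \dfrac{b^{2}}{2\gamma}, & |b| \le \gamma\lambda,\\[6pt]
    \dfrac{\gamma\lambda^{2}}{2}, & |b| > \gamma\lambda.
  \end{cases}
\]
```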
“…L0L2 penalized regression allows for control over the number of selected SNPs (L0) and shrinkage of effect sizes to avoid overfitting (L2). These two penalties together can improve prediction performance by better leveraging SNPs in high LD compared to the L1 penalty used in Lassosum, and they also have a conceptual connection to the Gaussian spike-and-slab prior commonly used in statistical genetics 13,15,33,42–46. Using a fast coordinate optimization algorithm, we can efficiently generate effect estimates, with a closed form at each iteration, given a grid of L0 and L2 tuning-parameter combinations.…”
Section: Overview of All-sum Framework
Mentioning confidence: 99%
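The closed-form per-coordinate update mentioned above can be illustrated with a generic L0+L2 coordinate-descent sketch. This is a minimal Python illustration under stated assumptions (plain least-squares loss, cyclic updates, fixed tuning parameters), not the quoted framework's software or its exact algorithm.

```python
# Minimal sketch of cyclic coordinate descent for the L0 + L2 penalized objective
#   (1/2) * ||y - X beta||^2 + lam0 * ||beta||_0 + lam2 * ||beta||^2.
# Each coordinate has a closed-form ridge solution; it is kept only when its
# quadratic improvement exceeds the L0 charge lam0 (a hard-thresholding step).
# Illustrative only: names, loss, and stopping rule are assumptions.
import numpy as np


def l0l2_coordinate_descent(X, y, lam0, lam2, n_passes=50, tol=1e-8):
    n, p = X.shape
    beta = np.zeros(p)
    resid = y.astype(float)               # residual y - X beta (beta starts at zero)
    col_sq = (X ** 2).sum(axis=0)          # precomputed x_j' x_j

    for _ in range(n_passes):
        max_change = 0.0
        for j in range(p):
            # Partial residual with coordinate j removed from the current fit.
            r_j = resid + X[:, j] * beta[j]
            denom = col_sq[j] + 2.0 * lam2
            b_ridge = (X[:, j] @ r_j) / denom          # closed-form ridge update
            # Keep the coordinate only if its quadratic gain beats the L0 charge.
            new_bj = b_ridge if 0.5 * denom * b_ridge ** 2 > lam0 else 0.0
            if new_bj != beta[j]:
                resid = r_j - X[:, j] * new_bj
                max_change = max(max_change, abs(new_bj - beta[j]))
                beta[j] = new_bj
        if max_change < tol:
            break
    return beta
```

In practice this would be run over a grid of (lam0, lam2) pairs, with the combination chosen by validation, matching the tuning-grid idea described in the quoted passage.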
“…discuss theoretical foundations of sparse rectified-linear-unit (ReLU) networks and the advantage of using a Spike-and-Slab prior as an alternative to Dropout. They show that the resulting posterior prediction of ReLU networks with Spike-and-Slab regularization converges to the true function at a rate of $\log^{\delta}(n)/n^{-K}$ for $\delta > 1$ and a positive constant $K$ (with $n$ the number of observations). The authors of [34] provide a theoretical connection between the Spike-and-Slab priors and $L_0$ norm regularization.…”
Section: Introduction
Mentioning confidence: 99%
“…They show that the resulting posterior prediction of ReLU networks with Spike-and-Slab regularization converges to the true function at a rate of $\log^{\delta}(n)/n^{-K}$ for $\delta > 1$ and a positive constant $K$ (with $n$ the number of observations). The authors of [34] provide a theoretical connection between the Spike-and-Slab priors and $L_0$ norm regularization. They demonstrate that the regularized estimators can result in improved out-of-sample prediction performance.…”
Section: Introduction
Mentioning confidence: 99%
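As a hedged sketch, in our own notation rather than a quotation from the cited works, of the kind of connection between Spike-and-Slab priors and L0 regularization referred to here: with inclusion probability θ and a Gaussian slab of variance v, the joint negative log posterior charges each nonzero coefficient a roughly constant amount once the slab is diffuse, so the MAP problem is approximately an l0-penalized least squares.

```latex
% Our notation; an illustrative reduction under a large-variance (diffuse) slab.
\[
  -\log p(\beta,\gamma \mid y)
  \;=\;
  \frac{1}{2\sigma^{2}}\,\lVert y - X\beta \rVert_2^{2}
  \;+\;
  \sum_{j:\,\beta_j \neq 0}
    \Bigl[
      \frac{\beta_j^{2}}{2v}
      + \log\frac{1-\theta}{\theta}
      + \tfrac{1}{2}\log(2\pi v)
    \Bigr]
  \;+\; \mathrm{const},
\]
\[
  \text{so for large } v:\qquad
  \hat\beta_{\mathrm{MAP}}
  \;\approx\;
  \arg\min_{\beta}\;
  \frac{1}{2\sigma^{2}}\,\lVert y - X\beta \rVert_2^{2}
  \;+\; \lambda\,\lVert \beta \rVert_0,
  \qquad
  \lambda \;\approx\; \log\frac{1-\theta}{\theta} + \tfrac{1}{2}\log(2\pi v).
\]
```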