2020
DOI: 10.1080/01621459.2020.1765784

Spike-and-Slab Group Lassos for Grouped Regression and Sparse Generalized Additive Models

Cited by 35 publications (24 citation statements)
References 46 publications
“…This prior can be reformulated as a likelihood penalty function that combines weak penalization of larger effects by λ1,l,k with strong penalization of effects close to zero by λ0,l,k (see Supplementary Material Section 1.2). As recommended by Ročková and George (2018), we use the non-separable version of the spike-and-slab lasso prior, which provides self-adaptivity of the sparsity level and automatic control for multiplicity via a Beta prior on θ (Bai et al. (2020a); Scott and Berger (2010)). We further set λ0,l,k = 50 for all l, k to achieve strong penalization in the “spike” part of the prior, leaving λ1,l,k as our only parameter controlling the total amount of penalty applied at larger effect values.…”
Section: Methods
Citation type: mentioning
confidence: 99%
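
For context, the prior this statement describes is the spike-and-slab group lasso of Bai et al. (2020): each coefficient group receives a two-component mixture of multivariate Laplace (group-lasso) densities, and the non-separable version places a shared Beta prior on the mixing weight θ. A sketch of the standard formulation in LaTeX, using a generic group index g in place of the citing paper's (l, k) subscripts:

% Spike-and-slab group lasso prior; beta_g is a group of m_g coefficients.
\[
  \pi(\beta_g \mid \theta)
    = (1 - \theta)\,\Psi(\beta_g \mid \lambda_0)
      + \theta\,\Psi(\beta_g \mid \lambda_1),
  \qquad
  \Psi(\beta_g \mid \lambda) \propto \lambda^{m_g}\, e^{-\lambda \lVert \beta_g \rVert_2},
\]
\[
  \theta \sim \text{Beta}(a, b).
\]
% With lambda_0 >> lambda_1, Psi(. | lambda_0) is the "spike" concentrated
% at zero and Psi(. | lambda_1) the diffuse "slab"; the shared Beta prior
% on theta is what makes the prior non-separable and self-adaptive.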
“…We use three minibatch sizes, m ∈ {n/4, n/2, n}, to test how this affects performance. We also implement the following models with embedded variable selection: LASSO (Tibshirani, 1996), MCP (Zhang, 2010), GAM with group LASSO penalty (Huang et al., 2009), and GAM with group spike-and-slab LASSO (SS LASSO) penalty (Bai et al., 2020). The LASSO and MCP were implemented using the R package ncvreg (Breheny and Breheny, 2021); the GAM group LASSO and GAM spike-and-slab LASSO (SS LASSO) were implemented using the R package sparseGAM (Bai, 2021).…”
Section: Methods
Citation type: mentioning
confidence: 99%
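
Both R packages named in this statement are on CRAN. Below is a minimal sketch of the benchmark fits, assuming a design matrix X, response y, and a vector grp of group labels for the basis-expansion columns; the SSGL() argument names are an assumption about the sparseGAM interface and should be checked against the package documentation:

library(ncvreg)     # LASSO and MCP (Breheny and Breheny, 2021)
library(sparseGAM)  # spike-and-slab group lasso (Bai, 2021)

fit_lasso <- ncvreg(X, y, penalty = "lasso")  # LASSO (Tibshirani, 1996)
fit_mcp   <- ncvreg(X, y, penalty = "MCP")    # MCP (Zhang, 2010)

# Tuning parameters for these benchmarks are typically chosen by
# cross-validation:
cv_mcp   <- cv.ncvreg(X, y, penalty = "MCP")
beta_hat <- coef(cv_mcp)  # coefficients at the CV-selected lambda

# Group spike-and-slab LASSO fit; lambda0 = 50 mirrors the spike penalty
# used in the citing paper (argument names assumed, see note above).
fit_ssgl <- SSGL(Y = y, X = X, groups = grp, family = "gaussian",
                 lambda0 = 50)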
“…We also implement the ML-II GP and several benchmark variable-selection algorithms: LASSO (Tibshirani, 1996), MCP (Zhang, 2010), GAM with group LASSO penalty (Huang et al., 2009), and GAM with group spike-and-slab LASSO penalty (Bai et al., 2020). We implement our SSVGP with minibatch sizes m ∈ {n/4, n/2, n} and the same settings as previously.…”
Section: Experiments 1: a Small-scale High Dimensional Variable Selec...
Citation type: mentioning
confidence: 99%
“…This prior can be reformulated as a likelihood penalty function that finds a balance between weak and strong penalization by λ1 and λ0, respectively (see Supplementary Material Section 1.2). As recommended by Ročková and George (2018), we use the non-separable version of the spike-and-slab lasso prior, which provides self-adaptivity of the sparsity level and automatic control for multiplicity via a Beta prior on θ (Bai et al. (2020a); Scott and Berger (2010)). We further set λ0,l,k = 50 for all k to achieve strong penalization in the “spike” part of the prior, leaving λ1,l,k as our only parameter controlling the total amount of penalty applied at larger effect values.…”
Section: Spike-and-slab Lasso prior
Citation type: mentioning
confidence: 99%
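
The "balance between weak and strong penalization" in this statement has a closed form: differentiating the log-prior yields an adaptive penalty rate that interpolates between λ1 and λ0 according to the conditional probability that a group was drawn from the slab. A sketch in the notation of Ročková and George (2018) and Bai et al. (2020), with Ψ as in the mixture prior above:

\[
  p^{*}(\beta_g; \theta)
    = \frac{\theta\,\Psi(\beta_g \mid \lambda_1)}
           {\theta\,\Psi(\beta_g \mid \lambda_1)
            + (1 - \theta)\,\Psi(\beta_g \mid \lambda_0)},
  \qquad
  \lambda^{*}(\beta_g; \theta)
    = \lambda_1\, p^{*}(\beta_g; \theta)
      + \lambda_0\,\bigl(1 - p^{*}(\beta_g; \theta)\bigr).
\]
% Near ||beta_g||_2 = 0, p* -> 0 and lambda* -> lambda_0 (strong
% penalization of effects close to zero); for large ||beta_g||_2,
% p* -> 1 and lambda* -> lambda_1 (weak penalization of larger effects).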