2016
DOI: 10.1007/s11222-016-9694-6
Stable prediction in high-dimensional linear models

Cited by 15 publications (18 citation statements)
References 23 publications
“…In essence, StabSel is a general ensemble learning procedure which can be used in conjunction with any selection algorithm to boost its selection performance. Like many others, here we focus on the combination of StabSel with the lasso (abbreviated as SSLasso to facilitate later discussions), the most natural combination for high-dimensional regression problems. In the generation stage, SSLasso applies the lasso repeatedly to subsamples randomly drawn from the training data.…”
Section: Ensemble Pruning of Stability Selection (mentioning, confidence: 99%)
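The generation stage described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact procedure: the function name `ss_lasso` and all parameter values (subsample fraction, regularization strength, selection threshold) are assumptions chosen for the toy example, and scikit-learn's `Lasso` stands in for whatever lasso solver the cited works use.

```python
# Illustrative sketch of stability selection with the lasso (SSLasso):
# fit the lasso on many random subsamples, record which coefficients are
# nonzero each time, and keep variables whose selection frequency exceeds
# a threshold. All names and parameter values here are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

def ss_lasso(X, y, n_subsamples=100, subsample_frac=0.5,
             alpha=0.1, threshold=0.6, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    m = int(subsample_frac * n)
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=m, replace=False)   # random subsample
        model = Lasso(alpha=alpha).fit(X[idx], y[idx])
        counts += (model.coef_ != 0)                 # tally selected variables
    freq = counts / n_subsamples                     # selection frequencies
    return np.flatnonzero(freq >= threshold), freq

# Toy example: 3 informative variables among 20.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
y = X[:, 0] + X[:, 1] - X[:, 2] + 0.1 * rng.standard_normal(200)
selected, freq = ss_lasso(X, y)
```

On data with strong, sparse signal like this, the informative variables are selected in nearly every subsample while noise variables rarely enter the lasso path, so thresholding the frequencies recovers the true support.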
“…During the review process, we were made aware of some recent works that also aim to re-weigh individual ensemble members, for example, the “random splitting model averaging” (RSMA) procedure and the VSD-selector (VSD-S), which weighs individual ensemble members using a so-called variable selection deviation metric. Both of these methods use weighting schemes that are much more elaborate than ours.…”
Section: Introduction (mentioning, confidence: 99%)
“…A specific case [26] of the model-independent approach, limited to linear models (with arbitrary solution algorithm and hyper-parameters), provides good results for heteroscedastic datasets ([26], supplementary materials) and is suitable for the ELM output-layer solution as well. The method applies to any amount of training data and will benefit from huge datasets by producing more independent models in its ensemble part.…”
Section: State-of-the-art (mentioning, confidence: 99%)
“…Thus, we increased the factor multiplying in Λ because is small in this simulation. We compared it with traditional stepwise search as well as some VSE techniques including BSS [20], PGA [18], RSMA [24], and ST2E [22]. The parameters involved in these methods were set according to the related literature.…”
Section: Simulation 3: Performance Comparison With Several Other… (mentioning, confidence: 99%)
“…Another good property of stability selection is that it provides an effective way to control the false discovery rate (FDR) in finite-sample cases, provided that its tuning parameters are set properly. Due to its versatility and flexibility, stability selection has been successfully applied in many domains such as gene expression analysis [24, 27–29]. Nevertheless, we have not found any literature about applying stability selection to a Cox model.…”
Section: Introduction (mentioning, confidence: 99%)
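The finite-sample error control referred to above is, in its best-known form, Meinshausen and Bühlmann's bound for stability selection: under an exchangeability assumption on the noise variables, the expected number of falsely selected variables can be bounded in terms of the selection threshold and the average number of variables chosen per subsample. A standard statement of the bound:

```latex
% Expected number of falsely selected variables V under stability selection,
% with selection threshold \pi_{\mathrm{thr}} \in (1/2, 1], p candidate
% variables, and q the average number of variables selected per subsample:
\mathbb{E}[V] \;\le\; \frac{1}{2\pi_{\mathrm{thr}} - 1} \cdot \frac{q^{2}}{p}
```

Choosing the lasso penalty so that q is small relative to p, and a threshold well above 1/2, therefore caps the expected number of false discoveries.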