Predictive and interpretable models via the stacked elastic net
2020 | DOI: 10.1093/bioinformatics/btaa535

Abstract (Motivation): Machine learning in the biomedical sciences should ideally provide predictive and interpretable models. When predicting outcomes from clinical or molecular features, applied researchers often want to know which features have effects, whether these effects are positive or negative, and how strong these effects are. Regression analysis includes this information in the coefficients but typically renders less predictive models than more advanced machine learning techniques. …
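
To illustrate the idea named in the title, a minimal sketch is given below. It is not the authors' original R implementation; it stacks scikit-learn elastic nets over a grid of mixing parameters and combines them with a non-negatively weighted meta-learner, with all data and settings chosen purely for illustration.

# Minimal sketch (an assumption, not the authors' original implementation):
# stack elastic nets that differ in their mixing parameter l1_ratio and
# combine them with a non-negatively weighted linear meta-learner.
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import ElasticNetCV, LinearRegression

# Simulated high-dimensional data, purely for illustration.
X, y = make_regression(n_samples=200, n_features=500, n_informative=20,
                       noise=5.0, random_state=1)

# Base learners: one cross-validated elastic net per mixing parameter.
base_learners = [
    ("enet_%.2f" % ratio, ElasticNetCV(l1_ratio=ratio, cv=5, max_iter=5000))
    for ratio in (0.05, 0.25, 0.50, 0.75, 0.95, 1.00)
]

# Meta-learner with non-negative weights, so the stacked model remains a
# weighted combination of interpretable elastic-net fits.
stack = StackingRegressor(
    estimators=base_learners,
    final_estimator=LinearRegression(positive=True),
    cv=5,
)
stack.fit(X, y)
print(stack.final_estimator_.coef_)  # weight given to each base elastic net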

Cited by 14 publications (9 citation statements)
References 14 publications (16 reference statements)

“…Elastic net was used for classification task, which was preferred due to its ability to address collinearity and to work successfully with high-dimensional but small datasets (Zou and Hastie, 2005; Rauschenberger et al , 2021). Furthermore, it performs feature selection and deals with grouping effects, thereby rendering classifiers more easily interpretable (Liu et al , 2016).…”
Section: Experiments and Results (mentioning; confidence: 99%)
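
As a hedged sketch of the kind of setup this statement describes, namely elastic-net classification on high-dimensional, small-sample data, the following scikit-learn snippet is one possible analogue; the simulated data and parameter grid are assumptions, not the citing study's configuration.

# Hedged sketch only: simulated data and grid are assumptions, not the
# citing study's configuration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

# High-dimensional, small-sample classification data (illustrative).
X, y = make_classification(n_samples=120, n_features=1000, n_informative=15,
                           random_state=0)

clf = LogisticRegressionCV(
    penalty="elasticnet",
    solver="saga",                # the scikit-learn solver supporting elastic net
    l1_ratios=[0.2, 0.5, 0.8],    # blend of lasso (sparsity) and ridge (grouping)
    Cs=10,
    cv=5,
    max_iter=5000,
)
clf.fit(X, y)
print("non-zero coefficients:", int((clf.coef_ != 0).sum()))  # sparse model
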
“…In high-dimensional problems such as PRS which involve a large number of SNPs, the best tuning parameters may be in the gaps or outside the bounds of the fixed grid being considered 23 . Recent works, in both theory and PRS applications, have demonstrated that ensembling multiple predictors can yield better prediction accuracy than choosing a single best predictor from grid search [24][25][26][27][28][29][30] .…”
Section: Main (mentioning; confidence: 99%)
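
A hypothetical sketch of that contrast follows: instead of keeping only the single best grid point, the whole grid of penalised fits is combined with non-negative weights learned on a tuning split. The data, grid, and combiner are illustrative assumptions, not a specific cited method.

# Illustrative assumption, not a specific cited method: combine all grid
# points instead of selecting the single best one.
import numpy as np
from scipy.optimize import nnls
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=400, n_features=300, n_informative=25,
                       noise=10.0, random_state=2)
X_train, X_tune, y_train, y_tune = train_test_split(X, y, test_size=0.25,
                                                    random_state=2)

alphas = np.logspace(-2, 1, 8)                     # the tuning grid
models = [Lasso(alpha=a, max_iter=10000).fit(X_train, y_train) for a in alphas]

# Predictions of every grid point on the tuning split ...
P = np.column_stack([m.predict(X_tune) for m in models])
# ... combined by non-negative least squares rather than an argmin choice.
weights, _ = nnls(P, y_tune)
print(dict(zip(np.round(alphas, 3), np.round(weights, 3))))
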
“…26, 2023, and have missing baseline measurements for age, sex, and the first ten PCs, and further restricting to unrelated samples so that no pair has relationship closer than fourth degree93 , we have a total of 318,052 samples for analysis. 40,000 samples were randomly chosen for tuning and validation - with 20,000 samples in each split.…”
(mentioning; confidence: 99%)
“…Moreover, if cross-validation is applied for both hyperparameter optimization and performance estimation (see Tip 7 ), a nested cross-validation scheme is required, i.e., while an outer cross-validation loop is used for performance estimation, an inner cross-validation loop is used for hyperparameter optimization. An alternative to selecting single hyperparameters by cross-validation is to combine multiple hyperparameters by stacked generalization [ 37 , 95 , 96 ]. Furthermore, predictive models avoiding explicit hyperparameter optimization may be chosen, e.g., random forests [ 97 99 ].…”
Section: Tip 6: Optimize Model Parameters and Feature Selection Witho... (mentioning; confidence: 99%)
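
To make the nested scheme mentioned in this last statement concrete, a minimal scikit-learn sketch follows; the model and hyperparameter grid are placeholders rather than anything from the cited tutorial. The inner loop tunes the hyperparameter, and the outer loop estimates the performance of the tuned procedure.

# Placeholder model and grid; only the nested structure matters here.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = make_classification(n_samples=300, n_features=50, random_state=0)

# Inner loop: hyperparameter optimisation.
inner = GridSearchCV(
    LogisticRegression(max_iter=2000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)

# Outer loop: performance estimation of the whole tuned procedure.
scores = cross_val_score(inner, X, y, cv=5)
print("nested CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))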