2019
DOI: 10.1007/s11634-019-00375-6
Sparse classification with paired covariates

Abstract: This paper introduces the paired lasso: a generalisation of the lasso for paired covariate settings. Our aim is to predict a single response from two high-dimensional covariate sets. We assume a one-to-one correspondence between the covariate sets, with each covariate in one set forming a pair with a covariate in the other set. Paired covariates arise, for example, when two transformations of the same data are available. It is often unknown which of the two covariate sets leads to better predictions, or whethe…
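The setting described in the abstract, one response predicted from two paired high-dimensional covariate sets, can be illustrated with a toy sketch. This is not the authors' paired lasso (their method uses adaptive, set-specific weights); it is a simplified Python illustration of the naive baseline of concatenating both representations and letting an L1 penalty select covariates from either set. The binarisation step, the sample sizes, and the penalty strength C=0.5 are all assumptions for the demo.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 100, 50

# Two paired representations of the same underlying data:
# a continuous covariate set and its binarised counterpart,
# so column j of X_cont pairs with column j of X_bin.
X_cont = rng.normal(size=(n, p))
X_bin = (X_cont > 0).astype(float)
y = (X_cont[:, 0] + X_cont[:, 1] > 0).astype(int)

# Naive baseline: concatenate both sets and let the L1 penalty
# choose covariates from either representation.
X_both = np.hstack([X_cont, X_bin])
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
cv_acc = cross_val_score(model, X_both, y, cv=5).mean()

model.fit(X_both, y)
n_selected = int(np.count_nonzero(model.coef_))  # sparse subset of the 2p columns
```

The open question the paper addresses is precisely what this naive concatenation ignores: whether to favour one covariate set, the other, or a weighted mix, which the paired lasso resolves via data-driven weights.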

Cited by 8 publications (9 citation statements)
References 27 publications
“…Finally, for many biomedical applications, natural structures among features or complementary information on the features can be exploited as an additional information source for model building. For example, among causally related features, we might want to prioritize the selection of upstream over downstream features in a known causal graph [100], to account for pairs or groups of functionally related features [101, 102], or to transfer information from previous studies (i.e., prior weights or prior effects) into the learning procedure. These approaches to integrate prior knowledge into the learning phase have the potential to render models more predictive and more interpretable.…”
Section: Tip 6: Optimize Model Parameters and Feature Selection Witho…
confidence: 99%
“…We considered all complete samples from the following tumor types: Mesothelioma (MESO [31], n = 84) and Sarcoma (SARC [32], n = 256). Data were preprocessed as described previously in Rauschenberger et al [33]. The comparison procedure for LN and NB distributions is:…”
Section: Comparison Of MLE For LN And NB Based On The Generic Deconvo…
confidence: 99%
“…Other (large) sets such as breast cancer and ovarian cancer (BRCA and OV) rendered no or a weak signal at best (c-index < 0.6) for any of the methods below, and hence these results were not shown. The data were preprocessed as described in Rauschenberger et al (2019). Further details on the data are given in the Appendix.…”
Section: Parallel Computing
confidence: 99%
“…As in ordinary regression settings, the best scale to represent a given data type is not known beforehand. In fact, in an adaptive elastic net setting, the joint use of a continuous and binary representation was shown to be potentially beneficial (Rauschenberger et al, 2019) for omics-based tumor classification. Our default multi-ridge (multiridge) allows for including both representations using different penalties to reflect different predictive signal for the two representations.…”
Section: Application Of Paired Multiridge
confidence: 99%
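The quoted idea of penalising the continuous and binary representations differently can be sketched with a standard ridge rescaling trick. This is not the multiridge software itself; the code below is a hypothetical Python illustration, with made-up penalty values lam1 and lam2, showing that scaling each covariate block by one over the square root of its penalty and then fitting ordinary ridge regression is equivalent to applying block-specific penalties.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n, p = 80, 30
X1 = rng.normal(size=(n, p))        # continuous representation
X2 = (X1 > 0).astype(float)         # paired binary representation of the same data
y = X1 @ (rng.normal(size=p) * 0.1) + rng.normal(size=n)

lam1, lam2 = 1.0, 10.0              # block-specific penalties (illustrative values)

# Rescaling trick: minimising ||y - X0 b||^2 + lam1*||b1||^2 + lam2*||b2||^2
# equals ordinary ridge (alpha=1) on blocks scaled by 1/sqrt(lam_g).
X = np.hstack([X1 / np.sqrt(lam1), X2 / np.sqrt(lam2)])
fit = Ridge(alpha=1.0, fit_intercept=False).fit(X, y)

# Map coefficients back to the original covariate scale.
beta1 = fit.coef_[:p] / np.sqrt(lam1)
beta2 = fit.coef_[p:] / np.sqrt(lam2)
```

Setting lam2 larger than lam1, as above, shrinks the binary block harder, which is the mechanism the citing paper uses to reflect unequal predictive signal across the two representations.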