2020
DOI: 10.1080/01621459.2019.1705308

Individualized Multidirectional Variable Selection

Abstract: In this paper, we propose a heterogeneous modeling framework that achieves individual-wise feature selection and individualized subgrouping of covariate effects simultaneously. In contrast to conventional model-selection approaches, the new approach constructs a separation penalty with multi-directional shrinkages, which facilitates individualized modeling to distinguish strong signals from noisy ones and selects different relevant variables for different individuals. Meanwhile, the proposed model identifies sub…

Cited by 34 publications (21 citation statements)
References 53 publications (90 reference statements)
“…where $h_i^T$ is the $i$th row of $H$, and $h_{ik} \in [0, 1]$ represents the probability that the $i$th sample belongs to the $k$th cluster. The penalty term $\min(|h_{ik}|, |h_{ik} - 1|)$ is a multi-directional separation penalty (MDSP) [32], which penalizes $h_{ik}$ toward either 0 or 1 depending on its magnitude. The purpose of adding the MDSP penalty is to prevent strong signals from being pulled toward zero while weak signals are shrunk for sparsity pursuit, and thus to reduce the uncertainty about the cluster membership of each instance.…”
Section: Metric Learning With Augmented Pairwise Constraints
confidence: 99%
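The quoted statement describes the MDSP term concretely enough to sketch. Below is a minimal NumPy illustration of the elementwise penalty $\min(|h_{ik}|, |h_{ik} - 1|)$; the function name, the tuning parameter `lam`, and the toy membership matrix are my own illustrative choices, not from the paper.

```python
import numpy as np

def mdsp_penalty(H, lam=1.0):
    """Multi-directional separation penalty, applied elementwise:
    lam * min(|h|, |h - 1|), summed over all entries.

    Each entry is pulled toward whichever of {0, 1} is closer, so
    near-hard memberships are barely penalized while ambiguous ones
    are, instead of everything being shrunk uniformly toward zero.
    """
    H = np.asarray(H, dtype=float)
    return lam * np.minimum(np.abs(H), np.abs(H - 1.0)).sum()

# Toy membership matrix: rows are samples, columns are clusters.
H = np.array([[0.95, 0.05],   # confident assignment: penalty 0.10
              [0.40, 0.60]])  # ambiguous assignment: penalty 0.80
print(mdsp_penalty(H))        # 0.9
```

Note how the ambiguous second row contributes eight times the penalty of the confident first row, which is the sharpening effect the quoted passage attributes to the MDSP.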
“…Wu et al. (2012) and Zhao et al. (2015), where the term used is often heterogeneous feature selection or sparsification. In contrast to the approach developed in Tang et al. (2020), these latter approaches shrink the corresponding unit-specific parameter to zero when a variable is selected, and not to some underlying population distribution shared across units.…”
Section: Related Literature
confidence: 99%
“…A number of papers have proposed approaches to accommodate heterogeneous variable selection. They have done so for multivariate linear models (Kim et al., 2009, Tang et al., 2020), multivariate binary probit models (Kim et al., 2018), and multinomial logit models (Gilbride et al., 2006, Scarpa et al., 2009, Hensher and Greene, 2010, Hole, 2011, Campbell et al., 2011, Hess et al., 2013, Hole et al., 2013). Few of these papers use a Bayesian approach (Gilbride et al., 2006, Kim et al., 2009, Kim et al., 2018).…”
Section: Introduction
confidence: 99%
“…The collection of dense longitudinal data makes it feasible to treat each patient as a subgroup rather than only a member of the population. In other words, individualized modeling is possible because the number of repeated measurements is abundant. As a result, the coefficients of covariates in regression models may be allowed to differ for each patient.…”
Section: Introduction
confidence: 99%
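As a toy illustration of this last point, the sketch below fits a separate least-squares regression per patient; all names and data are hypothetical, and plain per-patient least squares stands in for (and omits the separation penalty of) the paper's actual method. With abundant repeated measurements, each patient's own coefficient vector is recoverable from that patient's data alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_obs, p = 3, 200, 2  # dense longitudinal setting: many obs per patient

# Each patient has their own coefficient vector (simulated toy data).
true_betas = rng.normal(size=(n_patients, p))
for i, beta in enumerate(true_betas):
    X = rng.normal(size=(n_obs, p))
    y = X @ beta + 0.1 * rng.normal(size=n_obs)
    # Per-patient fit: no pooling across patients.
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"patient {i}: true {np.round(beta, 2)}, fitted {np.round(beta_hat, 2)}")
```

With 200 observations per patient the individual fits are close to the truth, which is the feasibility claim the quoted passage makes.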