2020
DOI: 10.1109/tit.2020.3013015
Ultrahigh-Dimensional Robust and Efficient Sparse Regression Using Non-Concave Penalized Density Power Divergence

Cited by 17 publications (18 citation statements: 0 supporting, 18 mentioning, 0 contrasting). References 45 publications.
“… [39] In particular, under appropriate assumptions, we have proved the following asymptotic properties of the DPD-SIS.
(R1) At the population level, for any $j = 1, \ldots, p$, the $j$-th marginal MDPDE functional corresponding to the $j$-th covariate $X_j$ is zero if and only if $X_j$ is uncorrelated with the response variable.
(R2) For some optimally chosen convergent sequence $R_n \to 0$, we have … for some constants $\kappa > 0$ and $\lambda > 0$. The first result provides the sure screening property of the DPD-SIS, whereas the second one proves its control of the false-discovery rate.
(R3) Combining the above results in (R2), for any $\alpha \geq 0$, we have $P\big(\widehat{\mathcal{M}} = \mathcal{M}_0\big) = 1 - o(1)$, i.e., DPD-SIS with any given $\alpha \geq 0$ satisfies model selection consistency.
Finally, based on the above sure screening property of the DPD-SIS and the consistency of the DPD-based penalized regression estimators from Ghosh and Majumdar [24], we can easily argue the consistency of the final estimator obtained by the proposed DPD-SIS Algorithm 1. A more general result in this regard is presented in the following theorem, justifying the use of the DPD-SIS Algorithm 1 for ultra-high-dimensional linear regression problems. Theorem A.1. Assume the conditions required for Result (R2) and the conditions of Theorem 4 of Ghosh and Majumdar [24] hold true for a given $\alpha$ …”
mentioning (confidence: 76%)
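To make the marginal screening rule behind (R1) concrete, below is a minimal Python sketch of DPD-based marginal screening under a normal linear model. It is a reading of the excerpt, not the authors' code: the objective is the standard empirical DPD loss for $N(x^T\beta, \sigma^2)$ densities, while the names `dpd_objective`, `marginal_mdpde`, `dpd_sis`, the default cutoff $d = n/\log n$, and the use of scipy's Nelder-Mead optimizer are all illustrative assumptions.

```python
# A hedged sketch of DPD-based marginal screening (DPD-SIS), assuming a
# linear model with normal errors. Function names and optimizer choice
# are illustrative, not taken from the paper.
import numpy as np
from scipy.optimize import minimize

def dpd_objective(params, y, X, alpha):
    """Empirical DPD objective for y ~ N(X @ beta, sigma^2), alpha > 0:
        (2*pi*sigma^2)^(-alpha/2) * [ (1+alpha)^(-1/2)
            - (1 + 1/alpha) * mean(exp(-alpha * r_i^2 / (2*sigma^2))) ],
    where r_i are the residuals; as alpha -> 0 the minimizer tends to the MLE.
    """
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                      # keeps sigma > 0
    r = y - X @ beta
    c = (2.0 * np.pi * sigma**2) ** (-alpha / 2.0)
    return c * ((1.0 + alpha) ** -0.5
                - (1.0 + 1.0 / alpha)
                * np.mean(np.exp(-alpha * r**2 / (2.0 * sigma**2))))

def marginal_mdpde(y, x_j, alpha):
    """MDPDE slope from regressing y on a single covariate plus an intercept."""
    X = np.column_stack([np.ones(len(y)), x_j])
    start = np.array([np.median(y), 0.0, np.log(np.std(y))])  # robust starts
    fit = minimize(dpd_objective, start, args=(y, X, alpha),
                   method="Nelder-Mead")
    return fit.x[1]                                # the slope coefficient

def dpd_sis(y, X, alpha=0.3, d=None):
    """Rank covariates by |marginal MDPDE| and keep the top d indices."""
    n, p = X.shape
    d = d or int(n / np.log(n))                    # a common SIS default choice
    scores = np.abs([marginal_mdpde(y, X[:, j], alpha) for j in range(p)])
    return np.argsort(scores)[::-1][:d]            # retained covariate indices
```

The robustness comes from the exponential weight $\exp(-\alpha r_i^2/(2\sigma^2))$ in the objective, which downweights observations with large residuals, so ranking by $|\widehat{\beta}_j^{M\alpha}|$ is far less distorted by outliers than ranking by marginal Pearson correlations.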
“… Set $r_k = j$ if $|\widehat{\beta}_j^{M\alpha}|$ has rank $k$, for $k = 1, \ldots, p$. Construct the estimated model set $\widehat{\mathcal{M}}_\alpha(d) = \{r_1, \ldots, r_d\}$, with indices corresponding to the top $d$ values of the (absolute) marginal MDPDEs. Run a robust penalized regression (low- or moderate-dimensional) with the covariates selected in $\widehat{\mathcal{M}}_\alpha(d)$ to obtain an estimated coefficient vector, say $\widehat{\boldsymbol{\beta}}_d = (\widehat{\beta}_{d0}, \widehat{\beta}_{dr_1}, \ldots, \widehat{\beta}_{dr_d})^T$. (We suggest using the DPD-based method of Ghosh and Majumdar [24] with the same $\alpha$, which also gives an estimate $\widehat{\sigma}^2$ of the overall model error variance $\sigma^2$.) Output: the final estimated model …”
Section: Proposed Robust Variable Screening Procedures
mentioning (confidence: 99%)
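Reading the excerpt's steps end to end, here is a hedged sketch of the two-stage procedure, reusing `dpd_objective` and `dpd_sis` from the block above: screen, keep the top $d$ covariates, then refit on $\widehat{\mathcal{M}}_\alpha(d)$. The excerpt's second stage is the non-concave penalized DPD regression of Ghosh and Majumdar [24]; as a stand-in, this sketch refits the unpenalized DPD objective on the selected, now low-dimensional, set, and `dpd_sis_fit` and the toy data are hypothetical.

```python
# Two-stage sketch: DPD-SIS screening, then a DPD refit on the retained set.
# Requires dpd_objective and dpd_sis from the previous sketch.
import numpy as np
from scipy.optimize import minimize

def dpd_sis_fit(y, X, alpha=0.3, d=None):
    keep = dpd_sis(y, X, alpha=alpha, d=d)              # estimated set M_hat_alpha(d)
    Xd = np.column_stack([np.ones(len(y)), X[:, keep]]) # intercept + kept covariates
    start = np.concatenate([[np.median(y)],             # robust initial values
                            np.zeros(len(keep)),
                            [np.log(np.std(y))]])
    fit = minimize(dpd_objective, start, args=(y, Xd, alpha),
                   method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-6})
    beta_d = fit.x[:-1]                                 # (beta_d0, beta_d r1, ..., beta_d rd)
    sigma2_hat = np.exp(2.0 * fit.x[-1])                # error-variance estimate
    return keep, beta_d, sigma2_hat

# Toy check on synthetic data with n << p and two gross outliers.
rng = np.random.default_rng(0)
n, p = 100, 500
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 3.0 * X[:, 1] + rng.standard_normal(n)
y[:2] += 50.0                                           # contamination
keep, beta_d, s2 = dpd_sis_fit(y, X, alpha=0.3, d=10)
print(sorted(keep[:2]), beta_d[:3], s2)                 # expect covariates 0 and 1 on top
```

Because screening reduces the problem from $p$ ultra-high-dimensional covariates to $d \ll n$ retained ones, the second-stage fit is an ordinary low-dimensional optimization; this is exactly the division of labor the excerpt's Theorem A.1 exploits to combine the sure screening property with the consistency of the penalized DPD estimator.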