2011
DOI: 10.1111/j.1467-9868.2010.00764.x

Penalized Composite Quasi-Likelihood for Ultrahigh Dimensional Variable Selection

Abstract: In high-dimensional model selection problems, penalized least-squares approaches have been used extensively. This paper addresses the robustness and efficiency of penalized model selection methods, and proposes a data-driven weighted linear combination of convex loss functions, together with a weighted L1-penalty. The procedure is completely data-adaptive and does not require prior knowledge of the error distribution. The weighted L1-penalty is used both to ensure the convexity of the penalty term a…
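The abstract describes the estimator only at a high level. As a minimal sketch of what such a penalized composite objective can look like, the following uses quantile check losses as the convex components and cvxpy as the solver; `taus`, `w`, `lam`, and `d` are illustrative stand-ins for the data-driven choices developed in the paper, not the paper's actual algorithm.

```python
import cvxpy as cp
import numpy as np


def composite_quantile_lasso(X, y, taus, w, lam, d):
    """Weighted sum of quantile check losses plus a weighted L1 penalty.

    Illustrative only: taus are quantile levels, w the loss weights,
    lam the penalty level, d the per-coefficient penalty weights.
    """
    n, p = X.shape
    beta = cp.Variable(p)
    b = cp.Variable(len(taus))  # one intercept per quantile level
    loss = 0
    for k, tau in enumerate(taus):
        r = y - X @ beta - b[k]
        # check loss: rho_tau(u) = u*(tau - 1{u<0}) = 0.5*|u| + (tau - 0.5)*u
        loss += w[k] * cp.sum(0.5 * cp.abs(r) + (tau - 0.5) * r)
    penalty = lam * cp.sum(cp.multiply(d, cp.abs(beta)))
    cp.Problem(cp.Minimize(loss / n + penalty)).solve()
    return beta.value, b.value


# Toy usage with hypothetical data.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = X[:, 0] - 2 * X[:, 1] + rng.standard_normal(100)
beta_hat, _ = composite_quantile_lasso(
    X, y, taus=[0.25, 0.5, 0.75], w=np.ones(3) / 3,
    lam=0.1, d=np.ones(10))
```

Because each check loss is convex and the penalty is a weighted L1 norm, the whole problem is a single convex (indeed linear) program, which is what makes combining several losses with data-driven weights tractable.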


Cited by 158 publications (137 citation statements)
References 30 publications (88 reference statements)
“…For further developments of this methodology in semiparametric settings, see Kai, Li and Zou (2011). To achieve variance reduction as well as robustness, Bradic, Fan and Wang (2011) introduced a penalized composite quasi-likelihood for ultrahigh dimensional variable selection by combining several convex loss functions, together with a weighted L1-penalty. As a common purpose of these methodologies is to reduce estimation variance, we call them the variance-reduction methodologies.…”
Section: Motivation and Existing Methodologies
confidence: 99%
“…As the results do not show a significant difference, we do not report them here. (1) The AWLS estimator β̂ and the WCQR estimator of Bradic, Fan and Wang (2011): it is worth pointing out that the simulation results depend on the assumed distribution of the error term. As pointed out by a referee, if the error is Gaussian, OLS is by far the best method in this setting, because the basic quantile regression estimators are much worse than OLS in that case.…”
Section: Simulations
confidence: 99%
“…Kai et al. (2011). Moreover, Bradic et al. (2011) propose a general loss-function framework for linear models, with a weighted sum of different kinds of loss functions, where the weights are selected in a data-driven way. Another type of loss, considered in Newey and Powell (1987), corresponds to expectile regression (ER).…”
Section: Introduction
confidence: 99%
“…Therefore, different types of flexible loss functions have been considered in the literature to improve estimation efficiency, such as composite quantile regression [29], [9], [10]. Moreover, [3] proposed a general loss-function framework for linear models, with a weighted sum of different kinds of loss functions, where the weights are selected in a data-driven way. Another special type of loss, considered in [17], corresponds to expectile regression (ER), which is similar in spirit to QR but contains mean regression as a special case.…”
confidence: 99%
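For concreteness, the two losses referenced in these excerpts can be written as follows (standard definitions; the notation is ours, not from the citing papers):

$$
\rho_\tau^{\mathrm{QR}}(u) = u\,\bigl(\tau - \mathbf{1}\{u < 0\}\bigr),
\qquad
\rho_\tau^{\mathrm{ER}}(u) = \bigl|\tau - \mathbf{1}\{u < 0\}\bigr|\, u^2 .
$$

At τ = 1/2 the expectile loss is u²/2, so ER reduces to ordinary mean regression, which is the special case noted above; both losses are convex in u for every τ ∈ (0, 1), which is what lets them serve as components in a composite framework.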