2007
DOI: 10.1198/073500106000000251

Robust Regression Shrinkage and Consistent Variable Selection Through the LAD-Lasso

Cited by 466 publications (313 citation statements)
References 16 publications
“…Similar methods were also developed for Cox's proportional hazard model (Zhang and Lu, 2007), least absolute deviation regression (Wang et al, 2007a), and linear regression with autoregressive residuals (Wang et al, 2007b).…”
Section: Introduction (mentioning)
confidence: 99%
“…The τ's are regarded as leverage factors, which adjust penalties on the coefficients by taking large values for unimportant covariates and small values for important ones. As for the choice of τ, any consistent estimate of β is a good candidate [18][19][20]. Denote the maximum marginal likelihood estimate (MMLE) of l_{n,M} by β̂.…”
Section: Penalized Marginal Likelihood Methods (mentioning)
confidence: 99%
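The leverage-factor idea quoted above can be sketched in a few lines: compute an initial consistent estimate of β, then set each penalty weight inversely proportional to the magnitude of the corresponding initial coefficient, so unimportant covariates receive large penalties and important ones small penalties. The use of OLS as the initial estimate and the `gamma` and `eps` parameters are illustrative assumptions here, not the cited papers' exact choices.

```python
import numpy as np

def adaptive_weights(X, y, gamma=1.0, eps=1e-8):
    """Leverage factors tau_j proportional to 1/|beta_init_j|^gamma.

    beta_init is an initial consistent estimate (OLS here, as one
    common choice); covariates whose initial coefficient is near zero
    get a large penalty weight, important covariates a small one.
    eps guards against division by an exactly zero coefficient.
    """
    beta_init, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 / (np.abs(beta_init) ** gamma + eps)
```

With noiseless data generated from β = (2, 0), the weight on the second (irrelevant) covariate dwarfs the weight on the first, which is the behavior the quoted passage describes.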
“…Among them, the LASSO [14,15] and the SCAD [16,17] are popular and have shown good performance in practice. More recently, the adaptive-LASSO [18,19] was proposed for linear models and shown to produce parsimonious models more effectively than the LASSO; Zhang and Lu [20] studied the adaptive-LASSO estimator for Cox's proportional hazards models. However, very little work has been done for variable selection in the proportional odds model.…”
Section: Introduction (mentioning)
confidence: 99%
“…Li and Zhu [6] considered quantile regression with the Lasso penalty and developed its piecewise linear solution path. Wang et al [7] investigated the least absolute deviation (LAD) estimate with adaptive Lasso penalty (LAD-Lasso) and proved its oracle property. Wu and Liu [8] further discussed the oracle properties of the SCAD (smoothly clipped absolute deviation) and adaptive Lasso regularized quantile regression.…”
Section: Introduction (mentioning)
confidence: 99%
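The LAD-Lasso criterion referenced in the quote above, min_β Σ_i |y_i − x_i'β| + Σ_j λ_j |β_j|, is piecewise linear in β and can therefore be solved exactly as a linear program. The sketch below is a minimal LP formulation using `scipy.optimize.linprog`, not the algorithm from the cited paper: auxiliary variables u_i bound the absolute residuals and v_j bound the absolute coefficients.

```python
import numpy as np
from scipy.optimize import linprog

def lad_lasso(X, y, lam):
    """Solve min_b sum_i |y_i - x_i'b| + sum_j lam_j |b_j| as an LP.

    Decision vector is [b (p), u (n), v (p)] with u_i >= |y_i - x_i'b|
    and v_j >= |b_j| enforced by pairs of linear inequalities.
    lam may be a scalar or a length-p array of per-coefficient weights.
    """
    n, p = X.shape
    lam = np.broadcast_to(lam, (p,))
    c = np.concatenate([np.zeros(p), np.ones(n), lam])
    In, Ip = np.eye(n), np.eye(p)
    Zn, Zp = np.zeros((n, p)), np.zeros((p, n))
    A_ub = np.block([
        [ X, -In,  Zn],   #  Xb - u <= y
        [-X, -In,  Zn],   # -Xb - u <= -y
        [ Ip,  Zp, -Ip],  #  b - v <= 0
        [-Ip,  Zp, -Ip],  # -b - v <= 0
    ])
    b_ub = np.concatenate([y, -y, np.zeros(2 * p)])
    bounds = [(None, None)] * p + [(0, None)] * (n + p)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]
```

Because the penalty λ_j can differ per coefficient, this same formulation accommodates the adaptive (weighted) Lasso penalty that gives LAD-Lasso its oracle property; a solver tailored to the problem would scale better than a dense LP for large n and p.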