2018
DOI: 10.1080/00949655.2018.1441415
Shrinkage and penalized estimators in weighted least absolute deviations regression models

Cited by 8 publications (9 citation statements)
References 22 publications
“…Hossain et al. 16 considered shrinkage, pretest, and penalty estimators in generalized linear models when there are many potential predictors. For related works, see Griffin and Brown, 17 Arabi Belaghi et al., 18 Arashi and Roozbeh, 19,20 Shah et al., 21 Ginestet et al., 22 Norouzirad and Arashi, 23,24 Yüzbaşı and Ahmed, 25 and Reangsephet et al. 26 Recently, Norouzirad et al. 27 developed shrinkage and penalized estimators in weighted least absolute deviations regression models.…”
Section: Methods
confidence: 99%
“…Therefore, to overcome this issue for model (3) when N < v, many recent studies (DeMiguel et al., 2009; Still and Kondor, 2010; Carrasco and Noumon, 2011; Fastrich et al., 2015; Long et al., 2018; Norouzirad et al., 2018) focus on regularization methods such as ridge regression (Hoerl and Kennard, 1970), the Least Absolute Shrinkage and Selection Operator (LASSO) (Tibshirani, 2011), Least Angle Regression (LARS) (Efron et al., 2004), the adaptive LASSO (Zou, 2006), and the Dantzig selector (Candes and Tao, 2007). However, all of these estimators are biased, reflecting the well-known bias-variance tradeoff; they might over-shrink the coefficients (Radchenko et al., 2011) and thus produce inaccurate portfolio weights.…”
Section: Regression With N < v and Its Challenges
confidence: 99%
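To make the N < v issue above concrete, the following minimal numpy sketch (an illustration of the general idea, not code from any of the cited works) shows that the ordinary least-squares normal equations are singular when there are fewer observations than variables, while the ridge estimator of Hoerl and Kennard, (X'X + kI)^{-1} X'y, remains well defined and shrinks the coefficients as the penalty k grows:

```python
import numpy as np

rng = np.random.default_rng(0)
N, v = 10, 25                      # fewer observations than variables (N < v)
X = rng.standard_normal((N, v))
y = rng.standard_normal(N)

# OLS normal equations: X'X is v x v but has rank at most N < v, so it is singular
# and the OLS estimator (X'X)^{-1} X'y does not exist.
assert np.linalg.matrix_rank(X.T @ X) <= N

def ridge(X, y, k):
    """Ridge estimator (X'X + k I)^{-1} X'y; well defined for any k > 0."""
    v = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(v), X.T @ y)

beta_small_k = ridge(X, y, k=0.1)
beta_large_k = ridge(X, y, k=10.0)

# Larger penalties shrink the coefficient vector toward zero: the price of a
# well-posed problem is bias (the bias-variance tradeoff noted above).
print(np.linalg.norm(beta_small_k) > np.linalg.norm(beta_large_k))  # True
```

The same singularity motivates the LASSO-type penalties listed in the quotation; ridge is shown here only because it has a closed form.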
“…The author of [ 14 ] built on the model-selection procedure of [ 19 ] for composite quantile regression (CQR) and suggested weighted CQR (WCQR), a procedure based on data-driven efficient weights, since the equal-weight procedure of [ 19 ] lacked optimality. To mitigate the undesirable effects of high leverage points on variable selection in the estimator ( ), the weighted LAD-LASSO (WLAD-LASSO) procedure has been suggested [ 20 , 21 ]. A few variable selection procedures based on WQR have been suggested in the QR framework in different settings.…”
Section: Introduction
confidence: 99%
“…In summary, the motivations of this study are premised on the following:

- The generalization of the WLAD-LASSO procedure [ 20 , 21 ] (in addition, we also include the RIDGE and E-NET penalties) to the QR framework, i.e., to penalized WQR, since each RQ (including the LAD estimator) is a local measure, unlike the LS estimator, which is a global one.
- Rather than carrying out an "omnibus" study of penalized WQR as in [ 20 , 21 ], we carry out a detailed study by distinguishing different types of high leverage points, viz., collinearity-influential points (which comprise collinearity-inducing and collinearity-hiding points) and high leverage points which are not collinearity influential.…”
Section: Introduction
confidence: 99%
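The WLAD-LASSO criterion referenced in the quotations above, min over beta of sum_i w_i |y_i - x_i' beta| + lambda * sum_j |beta_j|, becomes a linear program once each absolute value is split into nonnegative parts. The sketch below is an illustrative solver using scipy's linprog, not the cited authors' code; the observation weights w are assumed to be given externally (e.g., from a rule that downweights high leverage points):

```python
import numpy as np
from scipy.optimize import linprog

def wlad_lasso(X, y, w, lam):
    """Solve min_beta sum_i w_i |y_i - x_i' beta| + lam * sum_j |beta_j|
    as a linear program over z = [bp (p), bn (p), u (n)], where
    beta = bp - bn with bp, bn >= 0 and u_i >= |y_i - x_i' beta|."""
    n, p = X.shape
    # Objective: lam on both halves of beta, the weights w on the residual slacks.
    c = np.concatenate([lam * np.ones(p), lam * np.ones(p), w])
    # u_i >= y_i - x_i'beta  ->  -X bp + X bn - u <= -y
    # u_i >= x_i'beta - y_i  ->   X bp - X bn - u <=  y
    I = np.eye(n)
    A_ub = np.block([[-X, X, -I], [X, -X, -I]])
    b_ub = np.concatenate([-y, y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    z = res.x
    return z[:p] - z[p:2 * p]

# Noise-free toy data with a sparse truth: a tiny penalty should recover it.
rng = np.random.default_rng(1)
X = rng.standard_normal((30, 3))
beta_true = np.array([2.0, 0.0, -1.0])
y = X @ beta_true
beta_hat = wlad_lasso(X, y, w=np.ones(30), lam=1e-6)
print(np.round(beta_hat, 3))  # approximately [ 2.  0. -1.]
```

With all weights equal to one this reduces to the ordinary LAD-LASSO; the weights are precisely what [ 20 , 21 ] use to guard the selection step against high leverage points.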