2021
DOI: 10.48550/arxiv.2107.03119
Preprint
Variable selection in convex quantile regression: L1-norm or L0-norm regularization?

Abstract: The curse of dimensionality is a recognized challenge in nonparametric estimation. This paper develops a new L0-norm regularization approach to the convex quantile and expectile regressions for subset variable selection. We show how to use mixed integer programming to solve the proposed L0-norm regularization approach in practice and build a link to the commonly used L1-norm regularization approach. A Monte Carlo study is performed to compare the finite sample performances of the proposed L0-penalized …
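The abstract contrasts two penalties attached to the quantile (pinball) loss: the L1 norm, which sums coefficient magnitudes, and the L0 norm, which counts nonzero coefficients and therefore performs subset selection directly. A minimal illustrative sketch (function names and the example vector are mine, not the paper's notation):

```python
# Hedged sketch: the pinball (check) loss used in quantile regression,
# and the L1 vs. L0 penalties the paper compares. Illustrative only.

def pinball_loss(residual, tau):
    """Check loss rho_tau(e) = e * (tau - 1{e < 0})."""
    return residual * (tau if residual >= 0 else tau - 1.0)

def l1_penalty(beta):
    # Sum of magnitudes: shrinks coefficients, convex, easy to optimize.
    return sum(abs(b) for b in beta)

def l0_penalty(beta, tol=1e-8):
    # Count of nonzero coefficients: exact sparsity, needs integer programming.
    return sum(1 for b in beta if abs(b) > tol)

beta = [0.0, 1.5, -0.3, 0.0]      # hypothetical coefficient vector
print(pinball_loss(2.0, 0.9))     # 1.8
print(l1_penalty(beta))           # 1.8
print(l0_penalty(beta))           # 2
```

The L0 count is what the paper's mixed integer program controls directly, while the L1 sum is the convex surrogate it is linked to.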

Cited by 2 publications (6 citation statements)
References 40 publications
“…Furthermore, while the estimated quantile function, Q_{y_i}, is always unique in the CER estimation, the feasible set of problem (4) could be unbounded. That is, there may exist multiple combinations of shadow prices (β_ij) leading to the same optimal value of the objective function (Dai, 2021). The non-unique estimates in both CQR and CER may further cause a longstanding problem of quantile crossing in quantile estimation (Dai et al, 2022).…”
Section: Convex Expectile Regression
“…By deriving an explicit piecewise-linear characterization of the regression function, Kuosmanen (2008) introduces convex nonparametric least squares (CNLS), a multivariate convex regression approach, which has since attracted great interest in econometrics, statistics, operations research, and machine learning (e.g., Magnani & Boyd, 2009; Seijo & Sen, 2011; Lim & Glynn, 2012; Hannah & Dunson, 2013; Yagi et al, 2020; Bertsimas & Mundru, 2021). The recent studies by Dai (2021) and Dai et al (2022) impose additional regularization on convex regression to address its various intrinsic problems.…”
Section: Motivation
“…Second, the curse of dimensionality is an acknowledged challenge in nonparametric estimation. To mitigate the effects of dimensionality, Dai (2021) introduces two different penalty functions, the L1 and L0 norms, to CQR and CER. The L1-CQR approach is formulated as…”
Section: Monotonic and Concave
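The L1-CQR problem the excerpt refers to minimizes asymmetrically weighted deviations subject to Afriat-type concavity and monotonicity constraints, with an L1 penalty on the subgradient vectors. A minimal pure-Python sketch for a single regressor (variable names `alpha`, `beta`, `lam` and the toy data are my assumptions, not the paper's exact formulation) that evaluates this penalized objective and checks feasibility for a candidate set of supporting hyperplanes:

```python
# Illustrative sketch of the L1-CQR objective for a concave, monotone fit:
#   min  sum_i [ tau*e_i^+ + (1-tau)*e_i^- ] + lam * sum_i |beta_i|
#   s.t. y_i = alpha_i + beta_i*x_i + e_i^+ - e_i^-     (fit at each point)
#        alpha_i + beta_i*x_i <= alpha_h + beta_h*x_i   (concavity, all h)
#        beta_i >= 0                                    (monotonicity)
# All names here are assumptions for illustration, not the paper's notation.

def cqr_objective(x, y, alpha, beta, tau, lam):
    total = 0.0
    for xi, yi, ai, bi in zip(x, y, alpha, beta):
        e = yi - (ai + bi * xi)                  # residual at observation i
        total += tau * max(e, 0.0) + (1.0 - tau) * max(-e, 0.0)
    return total + lam * sum(abs(b) for b in beta)

def concavity_ok(x, alpha, beta, tol=1e-9):
    # Afriat-type inequalities: each point's own hyperplane is the minimum
    # over all hyperplanes evaluated at that point.
    n = len(x)
    return all(alpha[i] + beta[i] * x[i] <= alpha[h] + beta[h] * x[i] + tol
               for i in range(n) for h in range(n))

x = [1.0, 2.0, 3.0]
y = [1.0, 1.8, 2.2]
alpha = [0.2, 0.6, 1.3]          # candidate intercepts
beta  = [0.8, 0.6, 0.3]          # candidate slopes (nonnegative, decreasing)
print(concavity_ok(x, alpha, beta))
print(round(cqr_objective(x, y, alpha, beta, tau=0.5, lam=0.1), 4))
```

In the full estimator this objective is minimized over all (alpha, beta, e) with a linear program; swapping the L1 term for an L0 count is what requires the mixed integer formulation discussed in the abstract.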