2020
DOI: 10.1093/biomet/asaa043
Optimal subsampling for quantile regression in big data

Abstract: We investigate optimal subsampling for quantile regression. We derive the asymptotic distribution of a general subsampling estimator and then derive two versions of optimal subsampling probabilities. One version minimizes the trace of the asymptotic variance-covariance matrix for a linearly transformed parameter estimator and the other minimizes that of the original parameter estimator. The former does not depend on the densities of the responses given covariates and is easy to implement…
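The density-free version of the subsampling probabilities described in the abstract lends itself to a short illustration. The sketch below is a hypothetical two-step implementation, assuming probabilities proportional to |τ − 1{y_i ≤ x_iᵀβ̃}|·‖x_i‖ computed from a uniform pilot fit; it is not the authors' code, and the exact expressions are given in the paper.

# Hypothetical sketch of a two-step optimal subsampling estimator for quantile regression.
import numpy as np
from scipy.optimize import minimize

def check_loss(beta, X, y, tau, w=None):
    # Quantile (check) loss rho_tau(u) = u * (tau - 1{u < 0}), optionally weighted.
    u = y - X @ beta
    rho = u * (tau - (u < 0))
    return np.sum(rho if w is None else w * rho)

def fit_qr(X, y, tau, w=None, beta0=None):
    # Minimise the (possibly weighted) check loss; Nelder-Mead is crude but keeps
    # this sketch dependency-light for small dimensions.
    beta0 = np.zeros(X.shape[1]) if beta0 is None else beta0
    res = minimize(check_loss, beta0, args=(X, y, tau, w),
                   method="Nelder-Mead", options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-8})
    return res.x

def optimal_subsample_qr(X, y, tau=0.5, r0=500, r=1000, rng=None):
    # Step 1: uniform pilot subsample and pilot estimate.
    rng = np.random.default_rng(0) if rng is None else rng
    n = X.shape[0]
    idx0 = rng.choice(n, size=r0, replace=True)
    beta_pilot = fit_qr(X[idx0], y[idx0], tau)
    # Step 2: subsampling probabilities proportional to |tau - 1{y_i <= x_i'beta_pilot}| * ||x_i||
    # (assumed form of the density-free probabilities; see the paper for details).
    score = np.abs(tau - (y <= X @ beta_pilot)) * np.linalg.norm(X, axis=1)
    probs = score / score.sum()
    # Step 3: re-estimate on the second subsample, weighting by inverse probabilities
    # so that the weighted loss targets the full-data loss.
    idx = rng.choice(n, size=r, replace=True, p=probs)
    return fit_qr(X[idx], y[idx], tau, w=1.0 / probs[idx], beta0=beta_pilot)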

Cited by 97 publications (52 citation statements, published 2020–2024); references 13 publications.
“…The main strategy we adopted was to study the asymptotic properties of the Hansen–Hurwitz estimator for the unknown parameter directly from the optimization problem instead of the accompanying estimating equation. In the process of revising our manuscript, we have noticed that Wang & Ma (2020) and Ai et al (2020a) obtain similar results on optimal subsampling for QR in the context of big data. Although we focus only on the usual linear QR models in this article, this strategy applies to any parametric problem involving a non‐differentiable loss function, such as composite QR (Zou & Yuan, 2008) and expectile regression (Newey & Powell, 1987).…”
Section: Discussion
confidence: 58%
“…This algorithm is stated in Algorithm 7. It has been shown in Wang and Ma (2020) that the rate of convergence of the final estimator is $(n_1 R)^{-1/2}$. From the result, we know that, at the 5% significance level, the seventh covariate (Euclidean distance) is not significant in the model and all others are significant.…”
Section: Optimal Subsampling Methods For Quantile Regression
confidence: 90%
“…The adaptive optimal subsampling algorithm for quantile regression was discussed in Wang and Ma (2020). Quantile regression estimates a specified quantile of the response variable conditional on the covariates, and has the form $q_\tau(y_i \mid x_i) = x_i^T\beta$, where $\tau$ indicates that the $\tau$-th conditional quantile of $y_i$ given $x_i$ is modelled.…”
Section: Optimal Subsampling Methods For Quantile Regression
confidence: 99%
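As a toy illustration of the linear conditional-quantile model $q_\tau(y_i \mid x_i) = x_i^T\beta$ quoted above, the snippet below simulates data and applies the hypothetical optimal_subsample_qr sketch given after the abstract; names and values are illustrative only, not taken from the paper.

# Toy usage of the optimal_subsample_qr sketch above (hypothetical code, not from the paper):
# simulate from a linear model and estimate the 0.75-th conditional quantile.
import numpy as np

rng = np.random.default_rng(1)
n, beta_true, tau = 20000, np.array([1.0, -2.0, 0.5]), 0.75
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ beta_true + rng.normal(size=n)   # N(0, 1) errors
beta_hat = optimal_subsample_qr(X, y, tau=tau, r0=500, r=2000, rng=rng)
# With N(0, 1) errors the true 0.75-quantile plane shifts the intercept by z_0.75 ~= 0.674,
# so beta_hat should be roughly (1.674, -2.0, 0.5).
print(beta_hat)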
“…Wang et al 9 provided an information‐based optimal subdata selection approach in the context of linear models. Wang and Ma 10 investigated optimal subsampling for quantile regression. Zhang and Wang 11 proposed a distributed subsampling procedure for big data linear models.…”
Section: Introduction
confidence: 99%