2019
DOI: 10.1109/tnnls.2019.2933467
Debiasing and Distributed Estimation for High-Dimensional Quantile Regression

Cited by 48 publications (20 citation statements)
References 24 publications
“…But this is not applicable when p is very large. Thus, for the high-dimensional QR problem, Zhao et al (2014) and Zhao et al (2019) adopted a one-shot averaging method based on the debiased local estimates as in (4). Accordingly, Chen et al (2020) proposed a communication-efficient multiround algorithm inspired by the approximate Newton method (Shamir et al, 2014).…”
Section: Non-smooth Loss Based Models
confidence: 99%
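The excerpt above describes the core recipe of the cited distributed approach: each machine computes an ℓ1-penalized quantile regression estimate, corrects its shrinkage bias with a one-step debiasing adjustment, and a single communication round averages the corrected estimates. Below is a minimal sketch of that recipe, assuming a plug-in constant for the conditional error density at zero and a ridge-regularized inverse covariance as the debiasing matrix; Zhao et al (2019) construct this matrix with a nodewise-lasso-type estimator, so these helpers are illustrative simplifications rather than the authors' exact method.

```python
# Illustrative sketch of one-shot averaging of debiased
# l1-penalized quantile regression estimates.
# Assumptions (not the paper's exact construction): the error
# density at zero is a known constant, and the debiasing matrix
# is a ridge-regularized inverse of the sample covariance.
import numpy as np
from sklearn.linear_model import QuantileRegressor

def debiased_local_qr(X, y, tau=0.5, alpha=0.1, density=0.25):
    """Fit l1-penalized QR on one machine's data, then apply a
    one-step debiasing correction to the local estimate."""
    n, p = X.shape
    beta = QuantileRegressor(quantile=tau, alpha=alpha,
                             fit_intercept=False).fit(X, y).coef_
    # Surrogate for the inverse Hessian (density * Sigma)^{-1}.
    theta = np.linalg.inv(X.T @ X / n + 0.05 * np.eye(p)) / density
    psi = tau - (y - X @ beta < 0)          # score: tau - 1{residual < 0}
    return beta + theta @ (X.T @ psi) / n   # debiased local estimate

def one_shot_average(splits, tau=0.5):
    """One communication round: average debiased local estimates
    from all machines; `splits` is a list of (X, y) shards."""
    return np.mean([debiased_local_qr(X, y, tau) for X, y in splits],
                   axis=0)
```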
“…Our paper also makes a novel contribution to the literature on inference for high-dimensional quantile regression. Up until now, this literature has been limited to different approaches for debiasing ℓ1-penalized estimates of the quantile regression vector when the response is homoscedastic (Zhao et al, 2019; Bradic and Kolar, 2017). Our approach differs in two aspects from this literature: first, our inference procedure allows for heteroscedastic responses.…”
Section: Prior and Related Work
confidence: 99%
“…This is conceptually very different from debiasing a regression vector, and the resulting estimator has distinct theoretical properties. In particular, we can derive the limit distribution of the estimated conditional quantile function, whereas this is not possible when using the results from Zhao et al (2019) and Bradic and Kolar (2017).…”
Section: Prior and Related Work
confidence: 99%
“…Recently, distributed sparse linear regression has become increasingly popular; for instance, [17], [18] employ diffusion adaptation to maintain the sparsity of the learned linear models. Generally speaking, a central idea for guaranteeing sparsity in such models is ℓ1-regularization; for example, see [19], [20], [21]. The mentioned methods share all the model parameters across the network, making the communication cost at least of the order of the data dimension.…”
Section: A. Related Work
confidence: 99%
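The sparsity mechanism behind the ℓ1-regularization mentioned in this excerpt is the soft-thresholding proximal step, which drives small coefficients exactly to zero. A minimal self-contained illustration (not drawn from any of the cited works):

```python
import numpy as np

def soft_threshold(beta, lam):
    """Proximal operator of lam * ||.||_1: shrink every coefficient
    toward zero and zero out those below lam in magnitude.  This is
    why l1 penalties yield exactly sparse models."""
    return np.sign(beta) * np.maximum(np.abs(beta) - lam, 0.0)

# Coefficients with magnitude <= 0.5 are set exactly to zero.
print(soft_threshold(np.array([1.2, -0.3, 0.5, -2.0]), 0.5))
# -> [ 0.7 -0.   0.  -1.5]
```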
“…The mentioned methods share all the model parameters across the network, making the communication cost at least of the order of the data dimension. Notably, [19] introduced a debiased LASSO model for a network-based setting that comes with theoretical convergence guarantees. In [20], the communication cost is of the order of the data dimension, but there is an upper bound on the number of steps needed for convergence.…”
Section: A. Related Work
confidence: 99%