2019
DOI: 10.1214/18-aos1730

Distributed inference for quantile regression processes

Abstract: The increased availability of massive data sets provides a unique opportunity to discover subtle patterns in their distributions, but also imposes overwhelming computational challenges. To fully utilize the information contained in big data, we propose a two-step procedure: (i) estimate conditional quantile functions at different levels in a parallel computing environment; (ii) construct a conditional quantile regression process through projection based on these estimated quantile curves. Our general quantile …
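The two-step procedure in the abstract can be sketched in a deliberately simplified setting: no covariates (plain sample quantiles rather than full quantile regression), with linear interpolation over the quantile levels standing in loosely for the paper's projection step. All sizes and names below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_machines, m = 100, 1000          # split n = 100,000 samples across machines
taus = np.linspace(0.1, 0.9, 17)   # quantile levels estimated in parallel

data = rng.normal(size=(n_machines, m))  # each row lives on one "machine"

# Step (i): each machine estimates the quantiles at all levels locally;
# the local estimates are then averaged level by level.
local_q = np.quantile(data, taus, axis=1)   # shape (len(taus), n_machines)
avg_q = local_q.mean(axis=1)

# Step (ii): assemble a quantile *function* from the pointwise estimates
# (here: simple linear interpolation over tau).
def quantile_fn(tau):
    return np.interp(tau, taus, avg_q)

print(quantile_fn(0.5))   # should land near the true N(0,1) median, 0
```

Only the averaging in step (i) needs communication (one vector of quantiles per machine), which is what makes the procedure attractive for massive data sets.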

Cited by 117 publications (78 citation statements)
References 26 publications
“…For example, in a special case of quantile estimation (i.e., p = 0), it is straightforward to show that √n |β̂_ndc − β(τ)| → ∞ in probability when n/m² → ∞ (see Theorem D.1 in the Appendix). A similar phenomenon occurs for general p; see Volgushev, Chao and Cheng (2018). In fact, the local QR estimators β̂_QR,k are biased, with bias O(1/m).…”
supporting
confidence: 56%
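The √n-inconsistency in the excerpt above stems from the O(1/m) bias of the local estimators: averaging across machines shrinks the variance but leaves the bias untouched, so it dominates once the number of machines grows too fast. A small numerical sketch of this effect in the p = 0 (sample quantile) case, with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
true_median = np.log(2.0)          # median of the Exp(1) distribution

k, m = 5000, 20                    # many machines, each with a tiny local sample
data = rng.exponential(size=(k, m))

# Divide-and-conquer: average the k local sample medians.
dc_est = np.median(data, axis=1).mean()

# Oracle: the median of the pooled sample of n = k * m points.
pooled_est = np.median(data)

# The local median of m = 20 Exp(1) draws is biased upward by roughly c/m;
# averaging over k machines cannot remove this bias, so dc_est stays off
# while pooled_est concentrates around log 2.
print(dc_est - true_median, pooled_est - true_median)
```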
“…, T_N) for some aggregation function G(·)). In recent years, this DC framework has been widely adopted in distributed statistical inference (see, e.g., Li, Lin and Li (2013); Chen and Xie (2014); Battey et al. (2018); Zhao, Cheng and Liu (2016); Shi, Lu and Song (2017); Banerjee, Durot and Sen (2018); Volgushev, Chao and Cheng (2018), and Section 2 for detailed descriptions).…”
mentioning
confidence: 99%
“…Proof. The first four statements (11)–(14) follow from Theorem S.2.1 of Volgushev, Chao, and Cheng (2017). More precisely, note that in the notation of Volgushev, Chao, and Cheng (2017), under assumptions (A1)–(A4) we have g_N = 0, c_N = 0, and ξ_m, m are constant.…”
Section: Results
mentioning
confidence: 87%
“…A key insight in the proofs is a detailed analysis of the expected values of remainder terms in the classical Bahadur representation for QR, while previous approaches (including Kato, Galvao, and Montes-Rojas (2012) and Galvao and Wang (2015)) focused on the stochastic order of those remainder terms. A similar analysis was previously performed in Volgushev, Chao, and Cheng (2017) for general QR models with growing dimension under the assumption of independent and identically distributed observations. The proofs involve subtle empirical process arguments, and extending those results to settings with dependent data requires a substantial amount of work.…”
Section: Introduction
mentioning
confidence: 88%