2018
DOI: 10.1080/00401706.2017.1345703
ADMM for High-Dimensional Sparse Penalized Quantile Regression

Cited by 96 publications (101 citation statements)
References 42 publications
“…3 Computational Algorithm. Koenker and D'Orey (1987) developed parametric linear programming to compute a quantile regression function for all τ ∈ (0, 1). Many algorithms have recently been introduced for high-dimensional sparse penalized quantile regression; see Gu et al. (2018) for an overview. For problem (2), there are C_p^t candidate submodels to fit the data for a given t, where C_p^t denotes the number of t-combinations from a given set of p elements.…”
Section: Sparse Composite Quantile Regression
confidence: 99%
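The combinatorial count C_p^t in the excerpt above is the binomial coefficient, which grows quickly and is why exhaustive best-subset search becomes infeasible in high dimensions. A minimal sketch using Python's standard library (the values of p and t below are hypothetical, for illustration only):

```python
from math import comb

# Number of candidate submodels C_p^t = comb(p, t): choose t active
# predictors out of p. Illustrative (hypothetical) dimensions.
p, t = 20, 3
n_submodels = comb(p, t)
print(n_submodels)  # 1140 candidate submodels for p = 20, t = 3
```

Even for modest p, enumerating all comb(p, t) subsets is intractable, motivating penalized (e.g., lasso-type) formulations solved with ADMM.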
“…Comparison with ADMM. Recently, researchers have developed new optimization techniques based on the alternating direction method of multipliers (ADMM) for solving QR problems (see, e.g., Yu, Lin and Wang (2017); Gu et al. (2018)). We refer the reader to Boyd et al. (2011) for more details on ADMM.…”
Section: In Particular
confidence: 99%
“…Equations (4.7) and (4.9) complete the algorithm for the proposed estimation in a linear expectile model. Because we add the proximal mapping of ρ_τ in the Z step, we call the algorithm the proximal ADMM algorithm, summarized as follows. The convergence of the proximal ADMM algorithm can be established similarly to Section 3.3 of Gu et al. [5]. As discussed in Gu et al. [5], the worst-case convergence rate of the proximal ADMM algorithm is at least of order 1/t at each communication round, where t is the iteration number.…”
Section: Proximal ADMM Algorithm
confidence: 99%
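The proximal mapping of the check loss ρ_τ(u) = u(τ − 1{u < 0}) mentioned in the excerpt above has a closed form: a shifted soft-thresholding operator. A minimal sketch, assuming the standard scaled proximal definition prox(v) = argmin_u ρ_τ(u) + (σ/2)(u − v)²; the function name and parameterization are illustrative, not taken from Gu et al.:

```python
import numpy as np

def prox_check_loss(v, tau, sigma):
    """Elementwise proximal mapping of the quantile check loss
    rho_tau(u) = u * (tau - 1{u < 0}):
        prox(v) = argmin_u rho_tau(u) + (sigma/2) * (u - v)^2.
    Closed form: shift v down by tau/sigma when v > tau/sigma,
    up by (1 - tau)/sigma when v < (tau - 1)/sigma, else 0."""
    v = np.asarray(v, dtype=float)
    return (np.maximum(v - tau / sigma, 0.0)
            + np.minimum(v - (tau - 1.0) / sigma, 0.0))

# For tau = 0.5, sigma = 1 this reduces to soft-thresholding at 0.5:
# prox_check_loss([2.0, 0.2, -2.0], 0.5, 1.0) -> [1.5, 0.0, -1.5]
```

In a proximal ADMM iteration for quantile regression, an operator of this form is what makes the Z-update cheap: it is applied elementwise to the current residual vector instead of solving a nonsmooth subproblem numerically.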