2022 30th European Signal Processing Conference (EUSIPCO)
DOI: 10.23919/eusipco55093.2022.9909929

ADMM for Sparse-Penalized Quantile Regression with Non-Convex Penalties

Abstract: This paper investigates quantile regression in the presence of non-convex and non-smooth sparse penalties, such as the minimax concave penalty (MCP) and smoothly clipped absolute deviation (SCAD). The non-smooth and non-convex nature of these problems often leads to convergence difficulties for many algorithms. While iterative techniques like coordinate descent and local linear approximation can facilitate convergence, the process is often slow. This sluggish pace is primarily due to the need to run these appr…

Cited by 3 publications (5 citation statements)
References: 42 publications

“…This can be achieved by adding a penalty function, P_{λ,γ}(w), to the quantile regression loss function. The optimization problem (4) takes a new form after penalizing the loss function as [21]:…”
Section: Preliminaries (mentioning; confidence: 99%)
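The penalized objective quoted here is the standard check-loss formulation; a minimal sketch in LaTeX, assuming N samples (x_i, y_i), quantile level τ, and the usual pinball loss ρ_τ (this notation is assumed, not copied from the paper):

\min_{w \in \mathbb{R}^{P}} \; \sum_{i=1}^{N} \rho_\tau\!\left(y_i - x_i^{\top} w\right) + P_{\lambda,\gamma}(w),
\qquad
\rho_\tau(u) = u\left(\tau - \mathbb{1}\{u < 0\}\right).
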
“…However, both the LLA and QICD algorithms can be computationally intensive and converge slowly because they rely on an inner loop. To tackle this issue, a more efficient single-loop ADMM algorithm was proposed in [21]. Nevertheless, all the algorithms mentioned above are limited to centralized quantile regression, and there has been little research on using MCP or SCAD penalties in distributed quantile regression.…”
Section: Introduction (mentioning; confidence: 99%)
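To illustrate why a single-loop scheme avoids the inner iterations of LLA/QICD, here is a minimal sketch of a linearized (proximal) ADMM for MCP-penalized quantile regression. It is a reconstruction under assumptions, not the exact algorithm of [21]; the function names, splitting z = y − Xw, and step-size choices are illustrative.

import numpy as np

def pinball_prox(v, tau, alpha):
    # Elementwise prox of alpha * rho_tau, where rho_tau(u) = u * (tau - 1{u < 0}).
    return np.where(v > alpha * tau, v - alpha * tau,
                    np.where(v < alpha * (tau - 1.0), v - alpha * (tau - 1.0), 0.0))

def mcp_prox(v, lam, zeta, alpha):
    # Elementwise prox of alpha * g_{lam,zeta} (firm thresholding); requires zeta > alpha.
    shrunk = np.sign(v) * np.maximum(np.abs(v) - alpha * lam, 0.0) / (1.0 - alpha / zeta)
    return np.where(np.abs(v) <= zeta * lam, shrunk, v)

def admm_sqr_mcp(X, y, tau=0.5, lam=0.1, zeta=3.0, sigma=1.0, n_iter=500):
    # Single-loop ADMM sketch for min_w sum_i rho_tau(y_i - x_i^T w) + MCP(w),
    # using the splitting z = y - X w and a scaled dual variable u.
    N, P = X.shape
    w, z, u = np.zeros(P), y.copy(), np.zeros(N)
    eta = sigma * np.linalg.norm(X, 2) ** 2  # majorizer of the w-subproblem curvature
    for _ in range(n_iter):
        grad = -sigma * (X.T @ (y - X @ w - z + u))       # gradient of quadratic coupling
        w = mcp_prox(w - grad / eta, lam, zeta, 1.0 / eta)  # one prox step: no inner solver
        z = pinball_prox(y - X @ w + u, tau, 1.0 / sigma)   # closed-form pinball prox
        u += y - X @ w - z                                   # scaled dual ascent
    return w

Note that because the penalty is non-convex and the loss non-smooth, standard ADMM convergence guarantees do not apply to this sketch; handling that rigorously is precisely the issue discussed in [21] and in the surrounding citations.
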
“…To overcome this limitation, in this paper, we investigate using an MCP as P_{λ,ζ}(w) = Σ_{p=1}^{P} g_{λ,ζ}(w_p) [23], which is a non-convex and non-smooth function, to provide sparsity in the estimated signal. The definition of MCP is given by [23]: g_{λ,ζ}(w) = λ|w| − w²/(2ζ) for |w| ≤ ζλ, and g_{λ,ζ}(w) = ζλ²/2 for |w| > ζλ [24]. Additionally, each l_i(·)…”
Section: Problem Formulation (mentioning; confidence: 99%)
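For context, a standard property of the MCP (stated here in our own words, not quoted from the citing paper) explains why it provides sparsity without the shrinkage bias of the ℓ1 norm: its derivative decays linearly to zero,

g'_{\lambda,\zeta}(w) = \operatorname{sign}(w)\,\max\!\left(0,\; \lambda - |w|/\zeta\right), \qquad w \neq 0,

so the penalty is flat (equal to ζλ²/2) once |w| > ζλ, leaving large coefficients essentially unshrunk while small ones are thresholded to zero.
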
“…While non-convex and non-smooth penalties may improve estimation accuracy in many problems [24,25], their non-convexity and non-smoothness complicate optimization. For penalized robust phase retrieval in particular, designing an optimization algorithm is more challenging due to the problem's non-convex and non-smooth nature.
Section: Introduction (mentioning; confidence: 99%)
“…Precisely, setting bounds on the change in the dual update step, in accordance with the primal variables, could offer a means for parameter tuning and a proof of convergence [23,33]. However, in non-smooth and non-convex settings, such as sparse-penalized quantile regression, the conditions for Lipschitz differentiability or implicit Lipschitz differentiability might not always be satisfied [34]. Thus, a compelling need arises to develop enhanced ADMM-based optimization methods that can efficiently handle such settings without relying on these assumptions.…”
Section: Introduction (mentioning; confidence: 99%)
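A minimal sketch of the bounding argument referenced here, assuming the generic splitting \min_{w,z} f(w) + h(z) subject to Xw + z = y, penalty parameter σ, scaled dual u, and an L_h-Lipschitz gradient for h (an illustrative setting, not taken from the paper): the optimality condition of the z-update ties the dual variable to ∇h,

\nabla h(z^{k+1}) = -\sigma u^{k+1}
\;\Longrightarrow\;
\|u^{k+1} - u^{k}\| \le \frac{L_h}{\sigma}\,\|z^{k+1} - z^{k}\|,

so the change in the dual variable is dominated by the change in the primal variable, which lets the augmented Lagrangian serve as a decreasing Lyapunov function. When h is the non-smooth pinball loss of quantile regression, ∇h does not exist and this chain breaks, which is exactly the gap described above.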