2023 IEEE Statistical Signal Processing Workshop (SSP)
DOI: 10.1109/ssp53291.2023.10208080

Distributed Quantile Regression with Non-Convex Sparse Penalties

Reza Mirzaeifard,
Vinay Chakravarthi Gogineni,
Naveen K. D. Venkategowda
et al.

Abstract: The surge in data generated by IoT sensors has increased the need for scalable and efficient data analysis methods, particularly for robust algorithms like quantile regression, which can be adapted to a variety of settings, including nonlinear relationships, heavy-tailed distributions, and outliers. This paper presents a sub-gradient-based algorithm for distributed quantile regression with non-convex, non-smooth sparse penalties such as the Minimax Concave Penalty (MCP) and the Smoothly Clipped Absolute Deviation (SCAD)…
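To make the abstract's setting concrete, the sketch below illustrates the kind of objective involved: the quantile (pinball) check loss combined with the MCP penalty, and one plain sub-gradient step on the resulting non-smooth, non-convex objective. This is a minimal single-node sketch under assumed function names and a fixed step size, not the paper's distributed algorithm.

import numpy as np

def pinball_loss(residual, tau):
    # Quantile (check) loss: rho_tau(r) = tau*r for r >= 0, (tau - 1)*r otherwise.
    return np.where(residual >= 0, tau * residual, (tau - 1) * residual)

def mcp_penalty(beta, lam, gamma):
    # Minimax Concave Penalty, applied elementwise and summed.
    a = np.abs(beta)
    inner = lam * a - a**2 / (2 * gamma)   # region |b| <= gamma * lam
    outer = 0.5 * gamma * lam**2           # constant region |b| > gamma * lam
    return np.where(a <= gamma * lam, inner, outer).sum()

def subgradient_step(X, y, beta, tau, lam, gamma, step):
    # One sub-gradient step on (1/n) * sum_i rho_tau(y_i - x_i' beta) + MCP penalty.
    r = y - X @ beta
    g_loss = -X.T @ np.where(r >= 0, tau, tau - 1.0) / len(y)
    a = np.abs(beta)
    g_pen = np.sign(beta) * np.maximum(lam - a / gamma, 0.0)  # MCP derivative (zero beyond gamma*lam)
    return beta - step * (g_loss + g_pen)

A distributed variant would combine such local updates across nodes (for example by consensus averaging); only the local computation is sketched here.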

Cited by 2 publications (4 citation statements, all published in 2024). References 30 publications.
“…This section presents a comprehensive simulation study to evaluate the performance of the proposed smoothing time-increasing ADMM (SIAD) algorithm in the context of sparse quantile regression. We compare the SIAD algorithm with existing state-of-the-art approaches, including QICD [27], LPA [18], LSCD [18], and the sub-gradient method (SUB) [30]. The performance of these algorithms is assessed in terms of convergence rate, efficiency in terms of mean square error (MSE), and accuracy in recognizing active and non-active coefficients.…”
Section: Simulation Results
confidence: 99%
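For context on the evaluation criteria quoted above (mean square error and correct identification of active versus non-active coefficients), a small sketch of how such metrics are typically computed is given below; the function name and the zero-tolerance threshold are illustrative assumptions, not the cited paper's code.

import numpy as np

def evaluate_estimate(beta_hat, beta_true, tol=1e-6):
    # MSE of the coefficient estimate plus support-recovery rates.
    mse = np.mean((beta_hat - beta_true) ** 2)
    active_true = np.abs(beta_true) > tol   # ground-truth active set
    active_hat = np.abs(beta_hat) > tol     # estimated active set
    tpr = active_hat[active_true].mean() if active_true.any() else 1.0
    fpr = active_hat[~active_true].mean() if (~active_true).any() else 0.0
    return {"mse": mse, "true_positive_rate": tpr, "false_positive_rate": fpr}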
“…A recently proposed sub-gradient algorithm, designed for weakly convex functions, achieves a convergence rate such that, after K iterations, it converges to an O(K^{-1/4})-stationary point, based on the derivative of the Moreau-envelope function [29]. This algorithm can be adapted for quantile regression penalized with MCP or SCAD [30], but the result depends on the step size, and the convergence speed might not be efficient. Thus, there is ongoing research to find more robust and efficient solutions for non-smooth, non-convex optimization problems.…”
Section: Introduction
confidence: 99%
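For reference, the Moreau-envelope stationarity measure mentioned in the statement above is commonly defined as follows; this is generic notation, and the exact constants used in [29] may differ.

% Moreau envelope of a weakly convex function \varphi with parameter \mu > 0:
\varphi_{\mu}(x) = \min_{y} \Big\{ \varphi(y) + \tfrac{1}{2\mu} \| y - x \|^{2} \Big\},
% and x is called \epsilon-stationary when \| \nabla \varphi_{\mu}(x) \| \le \epsilon.
% The quoted rate means that K sub-gradient iterations guarantee \epsilon = O(K^{-1/4}).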
“…In this section, we present a comprehensive simulation study to evaluate the performance of the proposed smoothing time-increasing penalty ADMM (SIAD) algorithm in the context of sparse quantile regression. We compare the SIAD algorithm with existing state-of-the-art approaches, including QICD [27], LPA [18], LSCD [18], and the sub-gradient method [30]. The performance of these algorithms is assessed in terms of convergence rate, efficiency in terms of mean square error (MSE), and accuracy in recognizing active and non-active coefficients.…”
Section: Simulation Results
confidence: 99%
“…A recently proposed sub-gradient algorithm can handle weakly convex functions, achieving a convergence rate of O(K^{-1/4}) to a stationary point, based on the derivative of the Moreau-envelope function [29]. This algorithm can be adapted for quantile regression penalized with MCP or SCAD [30], but the result depends on the step size, and the convergence speed might not be efficient. Thus, there is ongoing research to find more robust and efficient solutions for non-smooth, non-convex optimization problems.…”
Section: Introduction
confidence: 99%