2022 56th Asilomar Conference on Signals, Systems, and Computers
DOI: 10.1109/ieeeconf56349.2022.10052074

Robust Phase Retrieval with Non-Convex Penalties

Abstract: This paper proposes an alternating direction method of multipliers (ADMM) based algorithm for solving the sparse robust phase retrieval problem with non-convex and non-smooth sparse penalties, such as the minimax concave penalty (MCP). The accuracy of robust phase retrieval, which employs an l1-based estimator to handle outliers, can be improved in a sparse setting by adding a non-convex and non-smooth penalty function, such as MCP, which can provide sparsity with a low bias effect. This problem can be effectively so…
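To make the penalty's role concrete: in an ADMM splitting, the MCP term typically enters through its proximal operator, the firm-thresholding map. The sketch below is a generic illustration under that assumption; the function name mcp_prox and the unit proximal step are illustrative choices, not details from the paper.

```python
import numpy as np

def mcp_prox(t, lam, gamma):
    """Proximal operator of the minimax concave penalty (MCP) with unit step,
    also known as firm thresholding. Assumes gamma > 1.

    Elementwise:
      0                                      if |t| <= lam
      sign(t) * (|t| - lam) / (1 - 1/gamma)  if lam < |t| <= gamma * lam
      t                                      if |t| >  gamma * lam
    """
    t = np.asarray(t, dtype=float)
    out = t.copy()                  # pass-through region: |t| > gamma * lam
    small = np.abs(t) <= lam
    mid = (np.abs(t) > lam) & (np.abs(t) <= gamma * lam)
    out[small] = 0.0                # hard zeroing below the threshold
    out[mid] = np.sign(t[mid]) * (np.abs(t[mid]) - lam) / (1.0 - 1.0 / gamma)
    return out
```

Unlike soft thresholding (the proximal map of the l1-penalty), the pass-through region for |t| > gamma * lam leaves large coefficients untouched, which is the "low bias effect" the abstract refers to.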

Cited by 3 publications (4 citation statements)
References 31 publications
“…In light of the proven effectiveness of the ADMM algorithm, its application to quantile regression is appealing. However, implementing ADMM in non-convex scenarios with proven convergence remains challenging, since existing non-convex ADMM methods frequently demand either a smooth part or an implicit Lipschitz condition to assure convergence [23,24,31]-[33]. Characteristics like Lipschitz differentiability can be beneficial in regulating the change in the dual update variable in non-convex optimization problems [23].…”
Section: Introduction
confidence: 99%
“…In scenarios lacking convexity in the objective function, managing the change in the dual update step relative to the primal variables becomes essential for ensuring convergence. Specifically, bounding the change in the dual update step in terms of the primal variables can offer a means for parameter tuning and a route to a convergence proof [23,33]. However, in non-smooth and non-convex settings, such as sparse penalized quantile regression, the conditions for Lipschitz differentiability or implicit Lipschitz differentiability might not always be satisfied [34].…”
Section: Introduction
confidence: 99%
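The dual-update control these snippets describe can be stated concretely for the standard two-block ADMM. The display below is a generic sketch, assuming the formulation min f(x) + h(z) subject to Ax + z = c with an L-Lipschitz-differentiable h; the notation is not taken from the cited works.

```latex
% Standard ADMM multiplier update with penalty parameter \rho:
\[
  \lambda^{k+1} = \lambda^{k} + \rho\,\bigl(Ax^{k+1} + z^{k+1} - c\bigr).
\]
% When h is L-Lipschitz differentiable, the optimality condition of the
% z-subproblem gives \lambda^{k+1} = -\nabla h(z^{k+1}), so the change in the
% dual variable is bounded by the change in the primal variable:
\[
  \|\lambda^{k+1} - \lambda^{k}\| = \|\nabla h(z^{k+1}) - \nabla h(z^{k})\|
  \le L\,\|z^{k+1} - z^{k}\|.
\]
```

This is exactly the mechanism that fails for non-smooth penalties such as MCP: without a Lipschitz gradient, the dual change can no longer be absorbed into the primal progress, which is why the snippets call the non-smooth, non-convex case challenging.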
“…One solution to this problem is using sparse penalties, such as the minimax concave penalty (MCP) [13] and the smoothly clipped absolute deviation (SCAD) [14], that are capable of intelligently distinguishing between active and inactive coefficients. These penalties not only encourage sparse solutions but also mitigate the bias effect of the l1-penalty [15], [16].…”
Section: Introduction
confidence: 99%
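For reference, the MCP cited as [13] in the snippet above has a standard closed form; the display below is the textbook definition with threshold λ and concavity parameter γ > 1, stated here for convenience rather than quoted from the paper.

```latex
\[
  P_{\lambda,\gamma}(t) =
  \begin{cases}
    \lambda |t| - \dfrac{t^{2}}{2\gamma}, & |t| \le \gamma\lambda, \\[4pt]
    \dfrac{\gamma\lambda^{2}}{2},         & |t| > \gamma\lambda.
  \end{cases}
\]
```

The quadratic term cancels the l1 slope as |t| grows, so the penalty is constant beyond γλ; this is what lets MCP distinguish active from inactive coefficients without biasing large ones.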