2019
DOI: 10.1016/j.cam.2018.08.021
Iterative thresholding algorithm based on non-convex method for modified lp-norm regularization minimization

Abstract: Recently, the ℓp-norm regularization minimization problem (P_λ^p) has attracted great attention in compressed sensing. However, the ℓp-norm ‖x‖_p^p in problem (P_λ^p) is nonconvex and non-Lipschitz for all p ∈ (0, 1), and few optimization theories and methods have been proposed to solve this problem. In fact, it is NP-hard for all p ∈ (0, 1) and λ > 0. In this paper, we study two modified ℓp regularization minimization problems to approximate the NP-hard problem (P_λ^p). Inspired by the good perf…
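In standard compressed-sensing notation, the problem the abstract denotes (P_λ^p) is usually written as follows (a reconstruction from the abstract's symbols, not a quote from the paper itself):

```latex
(P_\lambda^p):\quad \min_{x \in \mathbb{R}^n} \; \|Ax - b\|_2^2 + \lambda \|x\|_p^p,
\qquad \|x\|_p^p = \sum_{i=1}^{n} |x_i|^p,
\quad p \in (0, 1),\ \lambda > 0.
```

For p ∈ (0, 1) the regularizer is a nonconvex, non-Lipschitz quasi-norm, which is what makes the problem NP-hard and motivates the modified surrogate problems the paper studies.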


Cited by 16 publications (11 citation statements). References 13 publications.
“…On the other hand, in solving the (ℓq)-problem, Cui et al. [15] propose to utilize the iterative thresholding (IT) algorithm to find the global optimal solution of a surrogate function. Xu et al. [12] design a half thresholding algorithm via thresholding representation theory to solve the (ℓq)-problem when q = 1/2.…”
Section: Related Work
confidence: 99%
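The iterative-thresholding template these works build on can be sketched minimally for the convex ℓ1 case; the ℓq variants (e.g. the half thresholding algorithm for q = 1/2) swap in a q-dependent thresholding operator. The matrix, step size, and iteration count below are illustrative assumptions, not the cited papers' settings:

```python
import numpy as np

def soft_threshold(v, tau):
    # ell_1 proximal operator; ell_q variants (e.g. half thresholding
    # for q = 1/2) replace this with a q-dependent operator.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, n_iter=200):
    # Iterative thresholding for min (1/2)||Ax - b||^2 + lam * ||x||_1.
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const of grad
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth term
        x = soft_threshold(x - step * grad, lam * step)
    return x
```

With A = I the iteration reaches its fixed point soft_threshold(b, λ) immediately, which is a convenient sanity check.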
“…For fair comparison, we followed the same setting as in [15]: the problem dimensions were n = 1024 and m = 256, and the ground truth x_0 ∈ R^n was a k-sparse signal whose non-zero entries followed an i.i.d. Gaussian distribution N(0, 1).…”
Section: Parameter Setting
confidence: 99%
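The quoted experimental setting is easy to reproduce; a minimal generator is sketched below. The quoted text fixes n, m, and the distribution of the non-zero entries, but not how the sensing matrix A is drawn, so the i.i.d. Gaussian A (and the sparsity level k) here are assumptions:

```python
import numpy as np

def make_instance(m=256, n=1024, k=30, seed=0):
    # Test instance matching the quoted setting: n = 1024, m = 256,
    # ground truth x0 is k-sparse with i.i.d. N(0, 1) non-zero entries.
    # A is drawn i.i.d. Gaussian (column-normalized by sqrt(m)) -- a
    # common choice, but an assumption: the quoted text omits it.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x0 = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x0[support] = rng.standard_normal(k)
    b = A @ x0                      # noiseless measurements
    return A, x0, b
```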
“…Finally, we want to explain the convergence of Algorithm 1. Although the traditional ISTA is designed for convex optimization, some extended convergence proofs for nonconvex problems have been provided in prior works such as (Cui 2018). Therefore, the adopted optimization process is theoretically guaranteed to converge to a stationary point.…”
Section: Optimization
confidence: 99%
“…denotes the ℓp norm of x. Many methods have been developed to solve such problems [10][11][12][13]. Many studies have shown that using the ℓp norm for 0 < p < 1 requires fewer measurements and achieves much better recovery performance than using the ℓ1 norm [14,15].…”
Section: Introduction
confidence: 99%