2018
DOI: 10.1109/access.2018.2880454
A Survey on Nonconvex Regularization-Based Sparse and Low-Rank Recovery in Signal Processing, Statistics, and Machine Learning

Abstract: In the past decade, sparse and low-rank recovery have drawn much attention in many areas such as signal/image processing, statistics, bioinformatics, and machine learning. To induce sparsity and/or low-rankness, the ℓ1 norm and the nuclear norm are among the most popular regularization penalties due to their convexity. While the ℓ1 and nuclear norms are convenient, since the related convex optimization problems are usually tractable, it has been shown in many applications that a nonconvex penalty can yield signific…

Cited by 136 publications (83 citation statements). References 225 publications (336 reference statements).
“…ℓ0 regularization, a nonconvex and non-smooth optimization problem, has been proven to yield more accurate sparse solutions than ℓ1 regularization, its convex relaxation [27, 28, 29]. Weighted ℓ1 regularization is an approximate representation of ℓ0 regularization [30, 31].…”
Section: Methods
confidence: 99%
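To make the quoted weighted-ℓ1/ℓ0 connection concrete, below is a minimal sketch of iteratively reweighted ℓ1 minimization in the spirit of Candès, Wakin, and Boyd: each pass solves a weighted ℓ1 problem, and the weight rule 1/(|x| + eps) pushes small coefficients toward zero, approximating the ℓ0 penalty. The cvxpy solver, the weight rule, and all problem sizes are illustrative assumptions, not the exact method of the cited references [27–31].

```python
import numpy as np
import cvxpy as cp  # assumed solver choice for illustration


def reweighted_l1(A, b, n_iters=5, eps=1e-3):
    """Iteratively reweighted l1: each pass solves a weighted l1 problem;
    weights 1/(|x| + eps) penalize small entries more, mimicking l0."""
    n = A.shape[1]
    w = np.ones(n)
    x_val = np.zeros(n)
    for _ in range(n_iters):
        x = cp.Variable(n)
        # weighted l1 objective subject to exact data fit A x = b
        prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))),
                          [A @ x == b])
        prob.solve()
        x_val = x.value
        w = 1.0 / (np.abs(x_val) + eps)  # small entries -> large penalty next pass
    return x_val


# toy usage: recover a 3-sparse vector from 40 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 0.7]
x_hat = reweighted_l1(A, A @ x_true)
print(np.count_nonzero(np.abs(x_hat) > 1e-4))  # ideally 3
```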
“…In contrast, the L2,1 norm is more stable and better preserves spatial information than the L1 regularizer, as demonstrated in [13, 47]. Additionally, the L2,1 norm is superior to the nonconvex norms when the signals are not sparse or when the matrix is not strictly low rank [48, 49]. The overall problem can thus be posed as an optimization problem given by minimize_{L, S, Δτ}…”
Section: Problem Formulation
confidence: 99%
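As a hedged sketch of the row-structured sparsity the quote attributes to the L2,1 norm, the snippet below computes the norm (sum of row-wise Euclidean norms) and its proximal operator, row-wise group soft thresholding; the function names and the threshold value are assumptions for illustration, not part of the cited formulation.

```python
import numpy as np


def l21_norm(X):
    """L2,1 norm of X: sum of Euclidean norms of the rows, so each
    row is treated as one group (kept or suppressed together)."""
    return float(np.sum(np.linalg.norm(X, axis=1)))


def prox_l21(X, tau):
    """Proximal operator of tau * ||.||_{2,1}: row-wise group soft
    thresholding. Rows with norm <= tau are zeroed; others shrink."""
    row_norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(row_norms, 1e-12), 0.0)
    return scale * X


X = np.array([[3.0, 4.0],    # row norm 5   -> shrunk to [2.4, 3.2] at tau = 1
              [0.1, 0.2]])   # row norm ~0.22 -> zeroed at tau = 1
print(l21_norm(X))
print(prox_l21(X, 1.0))
```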
“…The dual problem (12) can be solved by well-developed LP solvers. The algorithm is summarized as follows.…”
Section: An ℓ1 Algorithm With Reduced Dimension
confidence: 99%
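The quoted dual problem (12) is not reproduced here, so as a generic illustration of handing an ℓ1 problem to an off-the-shelf LP solver, the sketch below casts basis pursuit (min ||x||_1 subject to Ax = b) as a linear program via the standard x = u − v splitting and solves it with scipy.optimize.linprog. This is an assumed stand-in for how such problems reach LP solvers, not the cited paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog


def basis_pursuit_lp(A, b):
    """Solve min ||x||_1 s.t. Ax = b as an LP.
    Split x = u - v with u, v >= 0, so ||x||_1 = sum(u + v)."""
    n = A.shape[1]
    c = np.ones(2 * n)            # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])     # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v


# toy usage: recover a planted 3-sparse vector
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.0, -0.8, 2.0]
x_hat = basis_pursuit_lp(A, A @ x_true)
print(np.round(x_hat[[3, 17, 42]], 3))  # close to the planted nonzeros
```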
“…or q (0 < q < 1) norm can usually yield a sparser solution than the 1 norm[12]. Empirical results have shown that the 1 minimization (10) is likely to remove inliers in some conditions[6],[24].…”
mentioning
confidence: 99%
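To illustrate the quoted point that ℓq (0 < q < 1) penalties discriminate sparsity more sharply than ℓ1, here is a small numeric check: two vectors with identical ℓ1 norm, one sparse and one dense, evaluated under the penalty sum |x_i|^q for decreasing q. The vectors and q values are arbitrary choices for illustration.

```python
import numpy as np


def lq_penalty(x, q):
    """lq quasi-norm raised to the q-th power: sum |x_i|^q.
    As q -> 0 this tends to the l0 count of nonzero entries."""
    return float(np.sum(np.abs(x) ** q))


sparse = np.array([2.0, 0.0, 0.0, 0.0])  # one nonzero,  l1 norm = 2
dense = np.array([0.5, 0.5, 0.5, 0.5])   # four nonzeros, l1 norm = 2
for q in (1.0, 0.5, 0.1):
    print(q, lq_penalty(sparse, q), lq_penalty(dense, q))
# q = 1.0: both equal 2.0          -- l1 cannot tell them apart
# q = 0.5: ~1.41 vs ~2.83          -- lq prefers the sparser vector
# q = 0.1: ~1.07 vs ~3.73          -- closer to counting nonzeros (l0)
```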