2014
DOI: 10.1109/tsp.2014.2340820

Iterative Concave Rank Approximation for Recovering Low-Rank Matrices

Abstract: In this paper, we propose a new algorithm for recovery of low-rank matrices from compressed linear measurements. The underlying idea of this algorithm is to closely approximate the rank function with a smooth function of singular values, and then minimize the resulting approximation subject to the linear constraints. The accuracy of the approximation is controlled via a scaling parameter δ, where a smaller δ corresponds to a more accurate fitting. The consequent optimization problem for any finite δ is nonconvex…
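The abstract, though truncated, conveys the core recipe: replace rank(X) with a smooth concave surrogate of the singular values, minimize the surrogate over the affine measurement constraints, and tighten the approximation by shrinking δ. Below is a minimal sketch of that recipe, assuming a Gaussian-family surrogate Σᵢ (1 − exp(−σᵢ²/(2δ²))), a projected-gradient inner loop, and a geometric δ schedule; the function name, step rule, and schedule are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def recover_low_rank(A, b, shape, deltas=None, inner_iters=50, step=1.0):
    """Sketch of concave-rank-approximation recovery (hypothetical helper).

    Approximately solves: min_X sum_i (1 - exp(-sigma_i(X)^2 / (2 delta^2)))
    subject to A @ vec(X) = b, by gradient steps on the smooth surrogate
    followed by projection onto the affine constraint set, while delta is
    shrunk (a continuation schedule). Illustrative, not the paper's exact
    algorithm.
    """
    m, n = shape
    A_pinv = np.linalg.pinv(A)
    x_feas = A_pinv @ b                     # minimum-norm feasible point
    X = x_feas.reshape(m, n)                # initial guess
    if deltas is None:
        # Start delta near the largest singular value; shrink geometrically.
        sigma_max = np.linalg.svd(X, compute_uv=False).max()
        deltas = sigma_max * 0.5 ** np.arange(1, 9)
    # Projector onto the null space of A keeps iterates feasible.
    P_null = np.eye(A.shape[1]) - A_pinv @ A
    for delta in deltas:
        for _ in range(inner_iters):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            # Gradient of the surrogate in X is U diag(f'(sigma)) V^T with
            # f'(sigma) = (sigma / delta^2) * exp(-sigma^2 / (2 delta^2)).
            fprime = (s / delta**2) * np.exp(-(s**2) / (2 * delta**2))
            G = (U * fprime) @ Vt
            # The delta^2 scaling keeps step sizes comparable across deltas.
            x = X.ravel() - step * delta**2 * G.ravel()
            # Project back onto {x : A x = b}.
            x = P_null @ x + x_feas
            X = x.reshape(m, n)
    return X

# Toy usage: recover a rank-1 5x5 matrix from 15 random measurements.
rng = np.random.default_rng(0)
X_true = np.outer(rng.standard_normal(5), rng.standard_normal(5))
A = rng.standard_normal((15, 25))
b = A @ X_true.ravel()
X_hat = recover_low_rank(A, b, (5, 5))
```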

Cited by 27 publications (34 citation statements); references 38 publications (98 reference statements).
Citation statements by type: 1 supporting, 33 mentioning, 0 contrasting.
“…This is because the constraints and the derivatives of the objective functions in the two models are the same when the MM method converges. Further, since the objective function in (15) monotonically decreases [36], we can confirm that u* is a local minimum of (15). This completes the proof.…”
Section: Appendix A, Proof of Theorem (supporting)
confidence: 69%
“…However, in the underdetermined setting of CS, it is not possible to have a convex program if sparsity is promoted by a nonconvex penalty. As a result, in [22], [23], the authors propose to use a continuation approach to decline the risk of getting trapped in a local solution without providing any theoretical guarantee. Here, we derive a condition for strict convexity, and this strict convexity allows us to guarantee convergence to the unique optimal solution.…”
Section: A. Contribution (mentioning)
confidence: 99%
“…The performance gap can be seen, for example, in the bias, support recovery, and estimation error [27]–[30]. On the other hand, a number of theoretical and experimental results in compressive sensing and low-rank matrix recovery (LMR) frameworks suggest that better approximations of the ℓ0 norm and the matrix rank result in better performances [22], [23], [28], [31]–[35]. These studies inspire the same result in the mean filtering problem.…”
Section: A. Motivation (mentioning)
confidence: 99%