2014
DOI: 10.1080/10556788.2014.936438
Penalty decomposition methods for rank minimization

Abstract: In this paper we consider general rank minimization problems with the rank appearing in either the objective function or the constraint. We first show that a class of matrix optimization problems can be solved as lower-dimensional vector optimization problems. As a consequence, we establish that a class of rank minimization problems have closed-form solutions. Using this result, we then propose penalty decomposition methods for general rank minimization problems in which each subproblem is solved by a block coordinate descent method.
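The closed-form solutions the abstract refers to rest, in the simplest case, on the fact that the best rank-r approximation of a matrix in Frobenius norm is its truncated SVD (the Eckart–Young theorem). A minimal NumPy sketch of that projection (illustrative only; `project_rank` is a hypothetical name, not the authors' code):

```python
import numpy as np

def project_rank(X, r):
    """Project X onto {M : rank(M) <= r} in Frobenius norm
    by keeping the r largest singular values (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[r:] = 0.0  # discard the tail singular values
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 5))
Y = project_rank(X, 2)
print(np.linalg.matrix_rank(Y))  # at most 2
```

The squared Frobenius error of this projection equals the sum of the squared discarded singular values, which is what makes the subproblems in penalty decomposition schemes tractable.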

Cited by 69 publications (71 citation statements). References 39 publications (107 reference statements).
“…The above problem can be solved by any ℓ0-minimization solver, such as CoSaMP [55], difference-of-convex [56], and penalty decomposition [57]. The error bound is presented in the following theorem.…”
Section: B. Error Bounds in Uncertainty Quantification
confidence: 99%
“…Hence, when the set Ω characterizes structure that conflicts with low rank, such as a correlation- or density-matrix structure, the nuclear norm relaxation method fails to yield a low-rank solution. In view of this, many researchers have recently developed effective solution methods based on sequential convex relaxation models arising from penalty problems [7,12], nonconvex surrogate problems [6,10,18,19], and the rank-constrained optimization problem itself [16,26,28]. We note that measuring the distance from a given point to the feasible set or the solution set plays a key role in the analysis of these methods.…”
Section: Introduction
confidence: 99%
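For the plain rank-constrained set {M : rank(M) ≤ r}, the Frobenius distance that the quoted passage highlights has a closed form: the root-sum-of-squares of the singular values beyond the r-th. A small NumPy sketch of that standard fact (`dist_to_rank_set` is a hypothetical helper, not from the cited works):

```python
import numpy as np

def dist_to_rank_set(X, r):
    """Frobenius distance from X to {M : rank(M) <= r}:
    the root-sum-of-squares of the singular values past the r-th."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sqrt(np.sum(s[r:] ** 2)))

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 4))
print(dist_to_rank_set(X, 4))  # 0.0: a 5x4 matrix has rank at most 4
```

With r = 0 the distance reduces to the Frobenius norm of X itself, a useful sanity check.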
“…2, the error detection problem with ℓ0-norm constraints, can also be solved effectively and efficiently by using [11]: compute a histogram of the entries of E, and then find a threshold value such that the histogram quantile (cumulative sum) above that threshold corresponds to γ out of mn pixels. That threshold is then used to set the remaining (small) entries (pixels) of E to zero.…”
Section: Algorithm 1: DFAR - Outer Loop
confidence: 99%
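The histogram-quantile thresholding sketched in the quote amounts to keeping the γ largest-magnitude entries of E and zeroing the rest, i.e. a projection onto the ℓ0 ball. A minimal NumPy sketch (`keep_largest` is a hypothetical name; ties at the threshold may keep slightly more than γ entries, just as with the histogram approach):

```python
import numpy as np

def keep_largest(E, gamma):
    """Hard-threshold E so that only its gamma largest-magnitude
    entries survive -- equivalent to picking the histogram quantile
    whose upper tail holds gamma of the mn entries."""
    flat = np.abs(E).ravel()
    if gamma >= flat.size:
        return E.copy()
    # threshold = gamma-th largest magnitude
    thresh = np.partition(flat, flat.size - gamma)[flat.size - gamma]
    return np.where(np.abs(E) >= thresh, E, 0.0)

E = np.array([[3.0, -0.1], [0.5, -2.0]])
print(keep_largest(E, 2))  # keeps 3.0 and -2.0, zeros the rest
```

Using a partial sort (`np.partition`) rather than an explicit histogram gives the same threshold in O(mn) time without choosing a bin width.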