2013
DOI: 10.1137/120863290

Augmented $\ell_1$ and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm

Abstract: This paper studies the long-existing idea of adding a nice smooth function to "smooth" a nondifferentiable objective function in the context of sparse optimization, in particular, the minimization of $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$, where $x$ is a vector, as well as the minimization of $\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2$, where $X$ is a matrix and $\|X\|_*$ and $\|X\|_F$ are the nuclear and Frobenius norms of $X$, respectively. We show that they let sparse vectors and low-rank matrices be efficiently recovered. In particular, they enjoy exact and stable recov…
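
The globally linearly convergent algorithm referenced in the title is, in this line of work, the linearized Bregman iteration, i.e., gradient ascent on the Lagrange dual of the augmented model. Below is a minimal sketch for the vector case, assuming the equality-constrained setting $\min_x \|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ s.t. $Ax = b$; the step size, the value of α, and the synthetic test problem are illustrative choices, not taken from the paper.

```python
import numpy as np

def shrink(z, t):
    """Soft-thresholding (shrinkage): sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def linearized_bregman(A, b, alpha, n_iters=5000):
    """Minimize ||x||_1 + 1/(2*alpha) * ||x||_2^2  s.t.  Ax = b
    by gradient ascent on the dual (the linearized Bregman iteration)."""
    y = np.zeros(A.shape[0])                         # dual variable
    tau = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)  # step within 2/(alpha*||A||^2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = alpha * shrink(A.T @ y, 1.0)  # primal iterate recovered from the dual
        y += tau * (b - A @ x)            # dual gradient ascent step
    return x

# illustrative test: recover a sparse vector from random measurements
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true
x_hat = linearized_bregman(A, b, alpha=10.0)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Each iteration costs two matrix-vector products. The structural point is that, unlike the plain $\ell_1$ problem, the augmented model's dual is differentiable with a Lipschitz gradient, which is the fact the paper's global linear-rate analysis builds on.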

Cited by 90 publications (101 citation statements). References 40 publications.
“…For the constant smoothing case, we choose the smoothing parameter µ = 0.1 based on the recommendation in [27] for the noiseless case. It is common, however, to see much smaller choices of µ; see [28], [29].…”
Section: Numerical Experiments
confidence: 99%
“…Again, current practice dictates using a smoothing parameter that has no dependence on the sample size m; see [31], for example. In our tests, we choose the baseline smoothing parameter µ = 0.1 recommended by [27]. As before, we compare the constant smoothing, constant risk, and balanced (α = 0.9) schemes.…”
Section: B. Calculating the Statistical Dimension
confidence: 99%
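
For orientation, a hedged reading of these parameter choices: if the citing papers' µ plays the role of the augmentation weight in this paper's notation (an assumption about their conventions, not something stated here), the smoothed model is

$$\min_x \; \|x\|_1 + \frac{\mu}{2}\|x\|_2^2 \quad \text{subject to} \quad Ax = b, \qquad \mu = \frac{1}{\alpha},$$

so a smaller µ (equivalently, a larger α) perturbs the plain $\ell_1$ objective less. This is consistent with the exact-recovery guarantees in the abstract: beyond a finite threshold on α, the augmented model's solution coincides with an $\ell_1$ minimizer, which is why a moderate value such as µ = 0.1 can already suffice in the noiseless case.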
“…It has been shown in [47,76] that the SVT algorithm with a finite τ can achieve perfect matrix completion, just as (2) does. The SVT algorithm can be reformulated as Uzawa's algorithm [4] or as the linearized Bregman iteration [9,10,58,73].…”
confidence: 99%
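
To make the SVT connection concrete: in its Uzawa / linearized Bregman form, each SVT step soft-thresholds the singular values of a dual matrix and then takes an ascent step on the observed entries. A minimal sketch, assuming the standard matrix-completion setup; the parameter conventions (τ = 5n, step δ = 1.2/p for sampling probability p) and the test problem are illustrative, not the cited papers' exact choices.

```python
import numpy as np

def svt(M_obs, mask, tau, delta, n_iters=300):
    """SVT in Uzawa / linearized Bregman form:
    X^k = D_tau(Y^{k-1})  (singular value soft-thresholding),
    Y^k = Y^{k-1} + delta * P_Omega(M - X^k)."""
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # D_tau(Y): shrink singular values
        Y += delta * mask * (M_obs - X)          # dual step on observed entries only
    return X

# illustrative test: complete a rank-2 matrix from half of its entries
rng = np.random.default_rng(1)
n, r, p = 60, 2, 0.5
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < p
X_hat = svt(mask * M, mask, tau=5 * n, delta=1.2 / p)
print("relative error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```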
“…In such a setting, dual gradient descent methods [40] can offer advantageous convergence behavior, but they exploit the fact that iterates for the dual variable lie in a relatively small linear space for completion problems, and they are thus not appropriate in our situation. In tensor completion, as a replacement for the nuclear norm in the matrix case, the sum of nuclear norms of matricizations has been proposed [32,41]; with this approach, however, one does not recover the particular properties of the matrix case.…”
Section: Relation To Previous Work
confidence: 99%
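
For concreteness, the tensor surrogate mentioned in this excerpt, the sum of nuclear norms of the mode-k matricizations, can be evaluated as follows; the function name and test tensor are illustrative.

```python
import numpy as np

def sum_nuclear_norms(T):
    """Sum of nuclear norms of all mode-k matricizations (unfoldings) of T."""
    total = 0.0
    for k in range(T.ndim):
        # mode-k unfolding: move axis k to the front, flatten the remaining axes
        Tk = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)
        total += np.linalg.norm(Tk, ord="nuc")  # nuclear norm of the unfolding
    return total

# illustrative usage on a small random 3-way tensor
T = np.random.default_rng(2).standard_normal((4, 5, 6))
print(sum_nuclear_norms(T))
```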