2015
DOI: 10.1109/tsp.2015.2461515
Forward–Backward Greedy Algorithms for Atomic Norm Regularization

Abstract: In many signal processing applications, the aim is to reconstruct a signal that has a simple representation with respect to a certain basis or frame. Fundamental elements of the basis known as "atoms" allow us to define "atomic norms" that can be used to formulate convex regularizations for the reconstruction problem. Efficient algorithms are available to solve these formulations in certain special cases, but an approach that works well for general atomic norms, both in terms of speed and reconstructi…

Cited by 62 publications (62 citation statements)
References 43 publications
“…Regularizing with an atomic gauge thus favors solutions that are sparse combinations of atoms, which motivated the use of algorithms that exploit the sparsity of the solution computationally [33,59]. It is clear from previous definitions that Lovász extensions are atomic gauges.…”
Section: Results (mentioning; confidence: 99%)
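The quoted passage notes that regularizing with an atomic gauge favors sparse combinations of atoms. As a minimal sketch (my own illustration, not code from the paper), take the atoms to be the signed standard basis vectors ±e_i: the atomic gauge of x is then the smallest total weight of a nonnegative combination of atoms reproducing x, which is exactly the ℓ1 norm and can be computed as a small linear program:

```python
import numpy as np
from scipy.optimize import linprog

def atomic_gauge_l1(x):
    """Atomic gauge with atom set {+e_i, -e_i}.

    Minimize sum(c) subject to [I, -I] @ c = x and c >= 0, where c stacks
    the weights on the positive and negative atoms. The optimum splits x
    into its positive and negative parts, so the value is ||x||_1.
    """
    n = x.size
    A_eq = np.hstack([np.eye(n), -np.eye(n)])
    res = linprog(c=np.ones(2 * n), A_eq=A_eq, b_eq=x,
                  bounds=(0, None), method="highs")
    return res.fun

x = np.array([1.5, -2.0, 0.0, 0.5])
# atomic_gauge_l1(x) equals np.abs(x).sum() == 4.0
```

With other atom sets (rank-one matrices, complex exponentials) the same gauge construction yields the nuclear norm or line-spectral atomic norms, which is what makes the greedy algorithms in the paper applicable across these settings.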
“…One example of particular interest that has been studied in the context of the CGM is matrix completion [25,38,20,49]. In this case, the (b) step reduces to computing the leading singular vectors of a sparse matrix.…”
Section: Interface and Implementation (mentioning; confidence: 99%)
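The step the authors describe, specialized to the nuclear norm, can be sketched as follows (a hedged illustration with my own naming, not code from the cited works): the linear minimization oracle of the conditional gradient method over a nuclear-norm ball is attained at a rank-one matrix built from the leading singular vectors of the gradient, which is cheap to compute when the gradient is sparse:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

def nuclear_lmo(grad, tau=1.0):
    # argmin over {S : ||S||_* <= tau} of <grad, S> is attained at
    # -tau * u1 v1^T, where (u1, s1, v1) is the leading singular
    # triplet of grad. svds touches only the top of the spectrum,
    # so this scales well for sparse gradients.
    u, s, vt = svds(grad, k=1)
    return -tau * np.outer(u[:, 0], vt[0]), s[0]

# A sparse stand-in for the gradient, for illustration only.
G = sparse_random(50, 40, density=0.05, random_state=0)
atom, s1 = nuclear_lmo(G)
```

By construction the returned atom lies on the boundary of the nuclear-norm ball, and its inner product with the gradient equals minus the leading singular value, the most negative value achievable over the ball.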
“…Indeed, it is well known that any modification of the iterate that decreases the objective function will not hurt theoretical convergence rates [24]. Moreover, Rao et al. [38] have proposed a version of the conditional gradient method, called CoGENT, for atomic norm problems that takes advantage of many common structures that arise in inverse problems. The reduction described in our theoretical analysis makes it clear that our algorithm can be seen as an instance of CoGENT specialized to the case of measures and differentiable measurement models.…”
Section: Related Work (mentioning; confidence: 99%)
“…Quantifying the discretization error incurred by solving ℓ1-norm minimization on a fine grid instead of solving the continuous TV norm minimization problem, in the spirit of [31]. Developing algorithms for TV norm minimization on a continuous domain; see [8,10,56] for some recent work in this direction. Extending our proof techniques to analyze deconvolution in multiple dimensions.…”
Section: Conclusion and Directions for Future Research (mentioning; confidence: 99%)