2016
DOI: 10.1109/lsp.2016.2593589

Convergence Rate Analysis of the Majorize–Minimize Subspace Algorithm

Abstract: State-of-the-art methods for solving smooth optimization problems include the nonlinear conjugate gradient, low-memory BFGS, and Majorize-Minimize (MM) subspace algorithms. The MM subspace algorithm, introduced more recently, has shown good practical performance compared with the other methods on various optimization problems arising in signal and image processing. However, to the best of our knowledge, no general result exists concerning the theoretical convergence rate of the MM subspace algorithm. Thi…
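For orientation, a sketch of the general scheme from the MM subspace literature (not a quotation from the paper): the iteration combines a quadratic tangent majorant with a low-dimensional search subspace,

$$x_{k+1} = x_k + D_k u_k, \qquad u_k \in \operatorname*{argmin}_{u}\; Q(x_k + D_k u \mid x_k),$$

where $Q(\cdot \mid x_k)$ is a quadratic function satisfying $Q(x \mid x_k) \ge F(x)$ for all $x$ with equality at $x = x_k$, and the columns of $D_k$ span the search subspace, e.g. $D_k = [-\nabla F(x_k),\; x_k - x_{k-1}]$ in the memory-gradient (3MG) variant.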


Cited by 22 publications (25 citation statements)
References 27 publications (47 reference statements)
“…In addition, in a nonstationary context, a theoretical study of the tracking abilities of the algorithm should be conducted. Finally, let us emphasize that a detailed analysis of the convergence rate of the proposed method has been undertaken in our recent paper [72].…”
Section: Results (mentioning, confidence: 99%)
“…Based on our recent results in [72], we provide a convergence rate result for Algorithm (20) in the case when the functions (ψ_s)_{1≤s≤S} are convex and twice differentiable. Then, there exists almost surely n_ε ∈ ℕ \ {0} such that, for every n ≥ n_ε,…”
Section: Convergence Rate (mentioning, confidence: 99%)
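The truncated statement above follows the usual template of linear-rate results for MM schemes. A hedged sketch of that template (the symbols θ and x̂ are illustrative, not the paper's exact notation):

$$(\forall n \ge n_\epsilon) \qquad F(x_{n+1}) - F(\hat{x}) \;\le\; \theta\,\bigl(F(x_n) - F(\hat{x})\bigr), \qquad \theta \in (0,1),$$

i.e., the objective gap decays geometrically once the iterates enter a suitable neighborhood of the minimizer.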
“…leads to the so-called MM Memory Gradient (3MG) algorithm [9], [10], whose strong performance has been demonstrated in [9], [19]. It is worth noting that the quadratic structure of h makes a solution u_k to (5) easy to determine:…”
Section: B. Subspace Acceleration (mentioning, confidence: 99%)
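A minimal sketch of the closed-form subspace update the excerpt alludes to, assuming a toy quadratic cost so that its Hessian A can serve directly as the majorant curvature; the function names and the two-column memory-gradient basis are illustrative assumptions, not code from [9], [10]:

```python
import numpy as np

# Toy quadratic test problem: F(x) = 0.5 x^T A x - b^T x,
# so the quadratic majorant's curvature can be taken as A itself.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 10))
A = M.T @ M + np.eye(10)          # symmetric positive definite
b = rng.standard_normal(10)

def grad_F(x):
    return A @ x - b

def mm_subspace_step(x, x_prev):
    """One memory-gradient (3MG-style) subspace step.

    The search subspace is spanned by the negative gradient and the
    previous displacement; the quadratic majorant is minimized in
    closed form over that two-dimensional subspace.
    """
    g = grad_F(x)
    D = np.column_stack([-g, x - x_prev])          # n x 2 subspace basis
    # Minimize q(u) = 0.5 u^T (D^T A D) u + u^T D^T g  =>  a 2x2 solve.
    B = D.T @ A @ D
    u = np.linalg.lstsq(B, -D.T @ g, rcond=None)[0]  # robust to rank loss
    return x + D @ u

x_prev = np.zeros(10)
x = x_prev - 0.01 * grad_F(x_prev)   # first step: plain gradient step
for _ in range(30):
    x, x_prev = mm_subspace_step(x, x_prev), x
print(np.linalg.norm(grad_F(x)))     # should be near zero at convergence
```

Because the majorant restricted to the subspace is a two-variable quadratic, each iteration costs one small linear solve on top of a gradient evaluation, which is what makes the 3MG step cheap.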
“…In order to find a minimizer of F, we propose a Majorize-Minimize (MM) approach, following the ideas in [67,64,68,69,70,71]. At each iteration of an MM algorithm, one constructs a tangent function that majorizes the given cost function and is equal to it at the current iterate.…”
Section: Minimization Algorithm for F (mentioning, confidence: 99%)
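A minimal sketch of a generic MM loop in the spirit described in this excerpt, using the standard quadratic tangent majorant of a smooth hyperbolic penalty; the cost F, the penalty, and all parameter values are assumptions for illustration, not the construction of [67,64,68,69,70,71]:

```python
import numpy as np

# Illustrative MM loop for a smooth penalized least-squares cost
#   F(x) = 0.5 ||H x - y||^2 + lam * sum_i sqrt(x_i^2 + delta^2),
# a standard example in the MM signal-processing literature.
rng = np.random.default_rng(1)
H = rng.standard_normal((40, 15))
y = rng.standard_normal(40)
lam, delta = 0.5, 1e-3

def F(x):
    return 0.5 * np.sum((H @ x - y) ** 2) + lam * np.sum(np.sqrt(x**2 + delta**2))

x = np.zeros(15)
for _ in range(50):
    # Tangent majorant: sqrt(t^2 + delta^2) is concave in t^2, so at the
    # current iterate
    #   sqrt(t^2 + delta^2) <= sqrt(x_i^2 + delta^2) + (t^2 - x_i^2) * w_i / 2
    # with weights w_i = 1 / sqrt(x_i^2 + delta^2); the majorant is quadratic
    # and touches F at x (the tangency condition quoted above).
    w = 1.0 / np.sqrt(x**2 + delta**2)
    # Minimize the majorant exactly: (H^T H + lam * diag(w)) x = H^T y.
    x_new = np.linalg.solve(H.T @ H + lam * np.diag(w), H.T @ y)
    if abs(F(x) - F(x_new)) < 1e-10:
        break
    x = x_new
print(F(x))
```

The weights w recompute the majorant's curvature at each iterate so the tangency condition holds, and minimizing the majorant exactly guarantees that F decreases monotonically across iterations.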