2000
DOI: 10.1080/10618600.2000.10474858

Optimization Transfer Using Surrogate Objective Functions

Cited by 438 publications (442 citation statements)
References 24 publications
“…For this reason, we borrow a technique of optimization transfer (see, e.g., [12, 29]) and construct surrogate functionals that effectively remove the term K*Kf. We first pick a constant C so that ‖K*K‖ < C, and then we define the functional Φ(f; a) = C‖f − a‖² − ‖Kf − Ka‖², which depends on an auxiliary element a of H. Because CI − K*K is a strictly positive operator, Φ(f; a) is strictly convex in f for any choice of a.…”
Section: The Iterative Algorithm: A Derivation From Surrogate Functionals (mentioning)
confidence: 99%
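The excerpt above gives enough detail to reconstruct the optimization-transfer step. Below is a minimal sketch, assuming a finite-dimensional K (an ordinary matrix standing in for the operator on H) and the unpenalized objective F(f) = ‖Kf − g‖²; K, g, and C follow the excerpt's notation, while the data are synthetic illustrations.

```python
import numpy as np

# Minimal finite-dimensional sketch of the surrogate construction quoted
# above: K is a matrix standing in for the operator on H, and the
# objective is F(f) = ||K f - g||^2.  Synthetic data, not from the paper.
rng = np.random.default_rng(0)
K = rng.standard_normal((30, 10))
g = K @ rng.standard_normal(10)

# Choose C with ||K* K|| < C, so that C I - K* K is strictly positive and
# Phi(f; a) = F(f) + C ||f - a||^2 - ||K(f - a)||^2 majorizes F.
C = 1.05 * np.linalg.norm(K.T @ K, 2)

def F(f):
    return np.sum((K @ f - g) ** 2)

f = np.zeros(K.shape[1])
for _ in range(300):
    # Zeroing the gradient of the strictly convex surrogate removes the
    # K* K f coupling and yields a Landweber-type update.
    f_new = f + (K.T @ (g - K @ f)) / C
    assert F(f_new) <= F(f) + 1e-10  # optimization transfer: monotone descent
    f = f_new

print(F(f))  # near zero: iterates approach a least-squares solution
```

Because CI − K*K replaces K*K in the quadratic term, minimizing the surrogate requires no inversion involving K*K, which is the computational payoff the excerpt describes.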
“…Iteration then results in convergence to the unique maximum. In [29] the MM method was applied to the least absolute deviation regression problem, leading to a viable surrogate for the absolute value penalty. By combining these ideas, [8] provides an iterative algorithm for maximizing the log-likelihood with a sparsity-inducing penalty in the multinomial case; this is the method used here in its binomial form.…”
Section: Methods and Theory (mentioning)
confidence: 99%
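The least absolute deviation (LAD) application attributed to [29] above can be sketched concretely. The following is a minimal illustration, assuming the standard majorization |r| ≤ (r²/c + c)/2 for any c > 0 (equality at c = |r|), which turns each MM step into a weighted least-squares solve; the data are synthetic, and the small clamp on the residuals is our own numerical safeguard rather than part of the cited derivation.

```python
import numpy as np

# MM for least absolute deviation regression: with c_i = |y_i - x_i beta~|,
# the majorization |r| <= (r^2 / c + c) / 2 makes each step an iteratively
# reweighted least-squares solve with weights 1 / c_i.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.laplace(scale=0.3, size=100)

def lad_loss(beta):
    return np.sum(np.abs(y - X @ beta))

beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least-squares start
for _ in range(100):
    w = 1.0 / np.maximum(np.abs(y - X @ beta), 1e-6)  # clamped IRLS weights
    Xw = X * w[:, None]
    beta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)    # weighted LS solve
    assert lad_loss(beta_new) <= lad_loss(beta) + 1e-8  # MM descent property
    beta = beta_new

print(beta)  # approaches the LAD estimate
```

The AM-GM bound remains a valid majorizer even with the clamp, so the descent property is preserved throughout.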
“…Minorization-maximization algorithms (MM; Lange et al., 2000), also known as bound optimization algorithms (see, e.g., Salakhutdinov and Roweis, 2003), are extensions of EM that do not require a missing-data framework. To maximize l(θ) (e.g., the log-likelihood), we first find a function Q(θ | θ̃) such that l(θ) ≥ Q(θ | θ̃) for all θ and θ̃, with equality when θ = θ̃.…”
Section: Extensions (mentioning)
confidence: 99%
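The definition in the excerpt translates directly into code. As a concrete instance, the sketch below uses the classical Böhning-Lindsay quadratic minorizer for the logistic log-likelihood, which replaces the Hessian with the fixed bound −XᵀX/4; logistic regression is our choice of example, not something taken from the excerpt, and the data are synthetic.

```python
import numpy as np

# Minorize-maximize: find Q(theta | theta~) <= l(theta) with equality at
# theta = theta~, then maximize Q.  For the logistic log-likelihood, the
# Bohning-Lindsay Hessian bound -X^T X / 4 yields such a Q.
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 4))
theta_true = np.array([0.5, -1.0, 2.0, 0.0])
y = (rng.random(200) < 1 / (1 + np.exp(-(X @ theta_true)))).astype(float)

def loglik(theta):
    z = X @ theta
    return y @ z - np.sum(np.logaddexp(0.0, z))

B = 4 * np.linalg.inv(X.T @ X)  # fixed curvature bound, inverted once
theta = np.zeros(4)
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ theta)))
    theta_new = theta + B @ (X.T @ (y - p))  # maximizer of Q(. | theta)
    assert loglik(theta_new) >= loglik(theta) - 1e-10  # guaranteed ascent
    theta = theta_new

print(theta)  # approaches the maximum likelihood estimate
```

Because the curvature bound is fixed, (XᵀX)⁻¹ is computed only once, which is the usual selling point of bound optimization over Newton's method.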
“…Theoretical properties, such as monotonicity and the convergence rate, are also analyzed in Section 3. We also discuss how it can be applied to more general algorithms such as the minorization-maximization (or majorization-minimization) algorithm (Lange et al., 2000). Section 4 applies monotonic overrelaxation to several examples, including least absolute deviations regression, Poisson inverse problems, and finite mixtures.…”
Section: Introduction (mentioning)
confidence: 99%
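The excerpt only names the examples, so the following is a generic sketch of how an overrelaxed MM step can be kept monotone, reusing the LAD update from the earlier sketch: extrapolate past the MM point by a factor ω > 1 and accept the trial only if the objective does not get worse, falling back to the plain (always monotone) MM step otherwise. This safeguard is a common generic device and may differ from the specific scheme analyzed in the cited paper.

```python
import numpy as np

# Monotonic overrelaxation around a generic MM update M(beta):
# propose beta + omega * (M(beta) - beta) and keep it only if the
# objective does not increase; otherwise take the plain MM step.
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.laplace(scale=0.3, size=100)

def lad_loss(beta):
    return np.sum(np.abs(y - X @ beta))

def mm_step(beta):
    # One MM step for LAD regression (weighted least squares, as above).
    w = 1.0 / np.maximum(np.abs(y - X @ beta), 1e-6)
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

omega = 1.8
beta = np.zeros(3)
for _ in range(100):
    base = mm_step(beta)                  # monotone by the MM argument
    trial = beta + omega * (base - beta)  # overrelaxed proposal
    beta = trial if lad_loss(trial) <= lad_loss(base) else base

print(lad_loss(beta))  # never worse than the plain MM iterate's loss
```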