2008
DOI: 10.1137/070688146

A Class of Inexact Variable Metric Proximal Point Algorithms

Abstract: For the problem of solving maximal monotone inclusions, we present a rather general class of algorithms, which contains hybrid inexact proximal point methods as a special case and allows for the use of a variable metric in subproblems. The global convergence and local linear rate of convergence are established under standard assumptions. We demonstrate the advantage of variable metric implementation in the case of solving systems of smooth monotone equations by the proximal Newton method.
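The abstract's proximal Newton idea for smooth monotone equations can be illustrated with a minimal sketch. This is not the paper's exact scheme: here each proximal subproblem F(x) + mu*M_k(x - x_k) = 0 is solved inexactly by a single Newton step, the test map F and the `metric` callback are illustrative choices, and the names are hypothetical.

```python
import numpy as np

A = np.array([[2.0, 1.0], [-1.0, 2.0]])  # symmetric part positive definite

def F(x):
    # A smooth monotone map: monotone linear part plus componentwise tanh.
    return A @ x + np.tanh(x)

def JF(x):
    # Jacobian of F; tanh'(t) = 1 / cosh(t)**2.
    return A + np.diag(1.0 / np.cosh(x) ** 2)

def proximal_newton(x0, metric, mu=1e-2, tol=1e-10, max_iter=50):
    """Inexact variable metric proximal point sketch: at iterate x_k,
    take one Newton step on the regularized system
        F(x) + mu * M_k (x - x_k) = 0,
    where M_k = metric(x_k, k) is a positive-definite variable metric
    (an illustrative callback, not the paper's criterion)."""
    x = np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Mk = metric(x, k)
        # Newton step for the regularized equation, linearized at x_k.
        x = x + np.linalg.solve(JF(x) + mu * Mk, -Fx)
    return x

# Fixed (identity) metric versus a diagonal metric built from the Jacobian.
x_fix = proximal_newton([1.0, 1.0], metric=lambda x, k: np.eye(2))
x_var = proximal_newton([1.0, 1.0], metric=lambda x, k: np.diag(np.diag(JF(x))))
```

Both runs converge to the unique zero of F; the point of the variable metric is that M_k can be adapted to local curvature, which the paper shows pays off for proximal Newton on monotone systems.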

Cited by 49 publications (67 citation statements)
References 33 publications (40 reference statements)
“…In such methods, the convergence rate is improved by using information from previous iterates in the construction of the new estimate. Another efficient way to accelerate the convergence of the FB algorithm is based on a variable metric strategy [1, 19–24]. The underlying metric of FB is modified at each iteration, giving rise to the so-called Variable Metric Forward-Backward (VMFB) algorithm:…”
Section: Introduction
confidence: 99%
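The VMFB iteration quoted above — a gradient (forward) step preconditioned by a metric, followed by a proximal (backward) step in that same metric — can be sketched for the lasso problem. This is a generic illustration, not the cited papers' method; the diagonal (Jacobi) metric and all names here are illustrative assumptions.

```python
import numpy as np

def vmfb_lasso(W, b, lam, n_iter=500):
    """Variable Metric Forward-Backward sketch for
        min_x 0.5*||W x - b||^2 + lam*||x||_1
    with a fixed diagonal metric A = diag(W^T W). For a diagonal metric,
    the prox of lam*||.||_1 in the metric A is still componentwise
    soft-thresholding, with per-coordinate thresholds gamma*lam/a_i."""
    WtW = W.T @ W
    a = np.diag(WtW)                      # diagonal metric A
    # Step size from the metric-preconditioned Lipschitz constant
    # ||A^{-1/2} W^T W A^{-1/2}||_2.
    L = np.linalg.norm(WtW / np.sqrt(np.outer(a, a)), 2)
    gamma = 1.0 / L
    x = np.zeros(W.shape[1])
    for _ in range(n_iter):
        grad = W.T @ (W @ x - b)          # forward (gradient) step ...
        y = x - gamma * grad / a          # ... scaled by A^{-1}
        # backward (proximal) step in the metric A:
        x = np.sign(y) * np.maximum(np.abs(y) - gamma * lam / a, 0.0)
    return x
```

With a non-diagonal metric the backward step would itself be a subproblem, which is exactly where inexact proximal schemes like the paper's come in.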
“…Many decomposition techniques (for monotone problems) are explicitly derived from the proximal point method [29,32] for maximal monotone operators, e.g., [10,41,42,44]. Sometimes the relation to the proximal iterates is less direct, e.g., the methods in [4,11,17,31,43], which were nevertheless more recently generalized and interpreted in [37,27] within the hybrid inexact proximal schemes of [39,30]. As some other decomposition methods, we might mention [22] which employs projection and cutting-plane techniques for certain structured problems, matrix splitting for complementarity problems in [6], and the applications of the latter to stochastic complementarity problems in [35].…”
Section: (1)
confidence: 99%
“…2.1 below. As for the regularization matrix Q_k, it should generally be taken as zero if F (and then also F_k, for natural choices) is known to be strongly monotone; if strong monotonicity does not hold, then Q_k should be positive definite (e.g., a multiple of the identity; but more sophisticated choices may be useful depending on the structure [30]). The notion of acceptable approximate solutions of subproblems is discussed in Sect.…
Section: The Algorithmic Framework
confidence: 99%
“…In order to obtain more efficient proximal algorithms, some authors have proposed the use of a variable metric or preconditioning in such algorithms [3,5,6,10,13,15,16].…”
Section: Introduction
confidence: 99%