2018
DOI: 10.1007/s10957-018-1272-y

Local Convergence of the Heavy-Ball Method and iPiano for Non-convex Optimization

Abstract: A local convergence result for an abstract descent method is proved. The sequence of iterates is attracted by a local (or global) minimum, stays in its neighborhood, and converges within this neighborhood. This result allows algorithms to exploit local properties of the objective function. In particular, the abstract theory in this paper applies to the inertial forward-backward splitting method iPiano, a generalization of the Heavy-ball method. Moreover, it reveals an equivalence between iPiano and inertial ave…
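For orientation, the iteration the abstract refers to can be sketched as follows. This is the standard iPiano update for a composite objective F = f + g with f smooth, with the usual step size α and inertial weight β; setting g ≡ 0 recovers the Heavy-ball method.

```latex
% iPiano / inertial forward-backward splitting (standard form; a sketch)
x^{k+1} = \operatorname{prox}_{\alpha g}\!\left( x^{k} - \alpha \nabla f(x^{k}) + \beta\,(x^{k} - x^{k-1}) \right),
\qquad
\operatorname{prox}_{\alpha g}(y) = \arg\min_{x}\; g(x) + \tfrac{1}{2\alpha}\,\|x - y\|^{2}.
% With g \equiv 0 the proximal map is the identity and the iteration reduces to
% the Heavy-ball method:  x^{k+1} = x^{k} - \alpha \nabla f(x^{k}) + \beta\,(x^{k} - x^{k-1}).
```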

Citations: cited by 30 publications (20 citation statements). References: 52 publications (115 reference statements). Selected citation statements:
“…By incorporating the idea of proximal mapping, the inertial proximal gradient algorithm (iPiano) was proposed in (Ochs et al., 2014), whose convergence in the nonconvex case was thoroughly discussed. Local linear convergence of iPiano and the Heavy-ball method was later proved in (Ochs, 2016). In the strongly convex case, linear convergence was proved for iPiano with fixed β_k (Ochs, Brox, and Pock, 2015).…”
Section: Introduction
confidence: 97%
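As a concrete illustration of the statement above, here is a minimal runnable sketch of the iPiano iteration with a fixed inertial parameter, written against a generic smooth term f and proximal map for g. The names (grad_f, prox_g, alpha, beta) and the toy problem are placeholders, not the notation of the cited papers; with prox_g equal to the identity the loop reduces to the Heavy-ball method.

```python
import numpy as np

def ipiano(x0, grad_f, prox_g, alpha=0.1, beta=0.5, n_iter=1000):
    """Sketch of the iPiano loop (inertial forward-backward splitting).

    x^{k+1} = prox_{alpha*g}( x^k - alpha*grad_f(x^k) + beta*(x^k - x^{k-1}) )

    With prox_g equal to the identity (g == 0) this is the Heavy-ball method.
    Parameter names are illustrative; the step-size conditions from the cited
    papers are not enforced here.
    """
    x_prev = np.array(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(n_iter):
        y = x - alpha * grad_f(x) + beta * (x - x_prev)  # forward + inertial step
        x_prev, x = x, prox_g(y, alpha)                  # backward (proximal) step
    return x

# Toy usage: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1 (soft-thresholding prox)
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 50)), rng.standard_normal(20), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda y, a: np.sign(y) * np.maximum(np.abs(y) - a * lam, 0.0)
x_hat = ipiano(np.zeros(50), grad_f, prox_g, alpha=0.5 / np.linalg.norm(A, 2) ** 2)
```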
“…In the literature, several accelerations of the Sinkhorn-Knopp algorithm have been proposed, using for instance greedy coordinate descent [13] or screening strategies [14]. In another line of research, the introduction of relaxation variables through heavy ball approaches [15] has recently gained popularity to speed up the convergence of algorithms optimizing convex [16] or non-convex [17,18] problems. In this context, the use of regularized nonlinear accelerations (RNAs) [19][20][21] based on Anderson mixing has led to important numerical improvements, although the global convergence is not guaranteed with such approaches, as is shown further.…”
Section: Accelerations of the Sinkhorn-Knopp Algorithm
confidence: 99%
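The statement above concerns heavy-ball (inertial) relaxation applied to the Sinkhorn-Knopp iterations. The sketch below is one plausible reading of that idea, with the extrapolation applied to the log-domain dual potentials; the parameter names and the placement of the inertial step are assumptions for illustration, not the exact scheme of the cited references.

```python
import numpy as np

def sinkhorn_heavy_ball(a, b, C, eps=0.1, beta=0.5, n_iter=500):
    """Sinkhorn-Knopp iterations with a heavy-ball style extrapolation on the
    log-domain dual potentials. Illustrative sketch only.
    """
    K = np.exp(-C / eps)                      # Gibbs kernel of the cost matrix
    f = np.zeros_like(a, dtype=float)         # row potential (log domain)
    g = np.zeros_like(b, dtype=float)         # column potential (log domain)
    f_prev, g_prev = f.copy(), g.copy()
    for _ in range(n_iter):
        # Plain Sinkhorn updates in the log domain
        f_new = eps * np.log(a) - eps * np.log(K @ np.exp(g / eps))
        g_new = eps * np.log(b) - eps * np.log(K.T @ np.exp(f_new / eps))
        # Heavy-ball extrapolation: push each potential along its last displacement
        f, f_prev = f_new + beta * (f_new - f_prev), f_new
        g, g_prev = g_new + beta * (g_new - g_prev), g_new
    # Transport plan induced by the (approximate) potentials
    return np.exp((f[:, None] + g[None, :]) / eps) * K

# Usage: couple two uniform histograms under a random cost
rng = np.random.default_rng(0)
a, b = np.full(30, 1 / 30), np.full(40, 1 / 40)
P = sinkhorn_heavy_ball(a, b, rng.random((30, 40)))
print(P.sum())  # total mass; marginals approach a and b when the scheme converges
```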
“…Such a local property was then widely applied to study the asymptotic convergence behavior of various gradient-based algorithms in nonconvex optimization [Attouch and Bolte, 2009; Bolte et al., 2014; Zhou et al., 2016; 2018b]. The KŁ property has also been applied to study convergence properties of accelerated gradient algorithms [Li et al., 2017; Li and Lin, 2015] and heavy-ball algorithms [Ochs, 2018] in nonconvex optimization. Some other works exploited the KŁ property to study the convergence of second-order algorithms in nonconvex optimization, e.g., [Zhou et al., 2018a].…”
Section: Related Work
confidence: 99%
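For reference, the Kurdyka-Łojasiewicz (KŁ) property invoked in the statement above is usually stated as follows (φ is the desingularizing function and ∂F the limiting subdifferential):

```latex
% KL property of F at \bar{x} (standard formulation): there exist \eta > 0,
% a neighborhood U of \bar{x}, and a concave \varphi \in C^0[0,\eta) \cap C^1(0,\eta)
% with \varphi(0) = 0 and \varphi' > 0, such that
\varphi'\!\bigl(F(x) - F(\bar{x})\bigr)\,
\operatorname{dist}\!\bigl(0, \partial F(x)\bigr) \;\ge\; 1
\qquad \text{for all } x \in U \text{ with } F(\bar{x}) < F(x) < F(\bar{x}) + \eta .
```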