2015 European Control Conference (ECC)
DOI: 10.1109/ecc.2015.7330562

Global convergence of the Heavy-ball method for convex optimization

Abstract: This paper establishes global convergence and provides global bounds on the convergence rate of the Heavy-ball method for convex optimization problems. When the objective function has a Lipschitz-continuous gradient, we show that the Cesàro average of the iterates converges to the optimum at a rate of O(1/k), where k is the number of iterations. When the objective function is also strongly convex, we prove that the Heavy-ball iterates converge linearly to the unique optimum.
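For reference, the Heavy-ball (Polyak momentum) iteration and the Cesàro average mentioned in the abstract take the standard form below; the admissible ranges for the step size α and the momentum β are specified in the paper and are not restated here.

$$
x_{k+1} = x_k - \alpha\,\nabla f(x_k) + \beta\,(x_k - x_{k-1}),
\qquad
\bar{x}_k = \frac{1}{k+1}\sum_{i=0}^{k} x_i .
$$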

Cited by 199 publications (197 citation statements)
References 19 publications
“…Our result improves the linear convergence established by Ghadimi, Feyzmahdavian, and Johansson (2015) in two aspects: first, the strong convexity assumption is weakened to (19); second, the step size and inertial parameter are chosen independently of the strong convexity constants.…”
Section: Linear Convergence With Restricted Strong Convexity (supporting)
confidence: 69%
“…Global convergence of the Heavy-ball method is established here for functions under condition (19) through the Lyapunov function (13). Similar results on global convergence for the narrower class of strongly convex functions were obtained in [13].…”
Section: Global Convergence (supporting)
confidence: 60%
“…Theoretically, it has been proved that HBGD enjoys a better convergence factor than both the gradient method and Nesterov's accelerated gradient method, with linear convergence rates, under the condition that the objective function is twice continuously differentiable, strongly convex, and has a Lipschitz-continuous gradient. With the convexity and smoothness assumptions on the objective function, the ergodic O(1/k) rate in terms of the objective value, i.e., f(x̄_k) − f* = O(1/k) for the Cesàro average x̄_k, was established in [Ghadimi et al., 2015]. HBGD was proved to converge linearly in the strongly convex case by [Ghadimi et al., 2015].…”
Section: Heavy-ball Algorithms (mentioning)
confidence: 99%
“…With the convexity and smoothness assumptions on the objective function, the ergodic O(1/k) rate in terms of the objective value, i.e., f(x̄_k) − f* = O(1/k) for the Cesàro average x̄_k, was established in [Ghadimi et al., 2015]. HBGD was proved to converge linearly in the strongly convex case by [Ghadimi et al., 2015]. But the authors used a somewhat restrictive assumption on the parameter β, which leads to a small range of admissible values for β when the strong convexity constant is tiny.…”
Section: Heavy-ball Algorithms (mentioning)
confidence: 99%
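Since several of the excerpts above refer to the HBGD iteration and its ergodic (Cesàro-averaged) O(1/k) rate, the following is a minimal numerical sketch of that iteration on a toy quadratic. The objective, the step size alpha, and the momentum beta are illustrative assumptions, not values taken from Ghadimi et al. (2015).

```python
import numpy as np

# Minimal Heavy-ball (HBGD) sketch on a toy strongly convex quadratic
# f(x) = 0.5 * x^T A x. The objective and the parameters alpha, beta are
# illustrative assumptions, not taken from Ghadimi et al. (2015).
A = np.diag([1.0, 10.0])          # eigenvalues give mu = 1, L = 10
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

alpha, beta = 0.1, 0.5            # step size and momentum (illustrative)
x_prev = x = np.array([5.0, 5.0])
avg = x.copy()                    # running Cesaro average of the iterates

for k in range(1, 101):
    # Heavy-ball update: x_{k+1} = x_k - alpha*grad(x_k) + beta*(x_k - x_{k-1})
    x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x
    avg += (x - avg) / (k + 1)    # xbar_k = (1/(k+1)) * sum_{i=0..k} x_i
    if k % 25 == 0:
        print(f"k={k:3d}  f(x_k)={f(x):.3e}  f(xbar_k)={f(avg):.3e}")
```

On a strongly convex quadratic like this, both f(x_k) and f(x̄_k) decrease quickly; the ergodic O(1/k) bound is the weaker guarantee that already holds in the merely convex, smooth case discussed in the excerpts.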