2016
DOI: 10.1007/s11590-016-1087-4
On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions

Abstract: We consider the gradient (or steepest) descent method with exact line search applied to a strongly convex function with Lipschitz continuous gradient. We establish the exact worst-case rate of convergence of this scheme, and show that this worst-case behavior is exhibited by a certain convex quadratic function. We also give the tight worst-case complexity bound for a noisy variant of the gradient descent method, where exact line search is performed in a search direction that differs from the negative gradient by at most a prescribed relative tolerance.
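The rate described in the abstract can be observed directly on a convex quadratic, where exact line search has a closed form. The sketch below is an illustration only (the quadratic, the constants mu and L, and the starting point are arbitrary choices, not the authors' code); it runs steepest descent with exact line search and compares the observed per-iteration decrease of f against the worst-case factor ((L - mu)/(L + mu))^2 established in the paper.

```python
import numpy as np

# Illustrative 2-D quadratic f(x) = 0.5 * x^T A x with curvatures mu and L,
# so f is mu-strongly convex with L-Lipschitz gradient, minimizer x* = 0 and f* = 0.
mu, L = 1.0, 10.0
A = np.diag([mu, L])

def f(x):
    return 0.5 * x @ A @ x

def grad(x):
    return A @ x

# Per-iteration worst-case factor from the paper.
worst_case = ((L - mu) / (L + mu)) ** 2

# This starting point makes steepest descent zigzag, so the bound is attained.
x = np.array([1.0 / mu, 1.0 / L])

for k in range(5):
    g = grad(x)
    # Exact line search on a quadratic: argmin_t f(x - t*g) = (g.g) / (g.A.g).
    t = (g @ g) / (g @ A @ g)
    x_new = x - t * g
    print(f"iter {k}: observed ratio {f(x_new) / f(x):.6f}  worst case {worst_case:.6f}")
    x = x_new
```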

Cited by 55 publications (89 citation statements)
References 5 publications
“…In [12, Remark 3.1], Drori remarked that the lower bound for smooth convex unconstrained minimization was achieved by a greedy method (referred to as the ideal first-order method). In [6, Section 4.1], de Klerk et al. study the worst-case complexity of steepest descent with exact line search applied to strongly convex functions. As suggested by an anonymous referee in [6], the worst-case certificates were also valid for the gradient method with an appropriate fixed step size.…”
Section: Links With Subspace-search Methods
confidence: 95%
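For context, the certificate this excerpt refers to is the per-iteration bound from [6], sketched below in standard notation (a paraphrase, not a quotation; the fixed step 2/(L + mu) is the natural candidate for the "appropriate" step size and is an assumption here, not taken from the excerpt's source).

```latex
% Per-iteration worst-case bound of [6] for steepest descent with exact line
% search on an L-smooth, mu-strongly convex f with minimizer x_star; per the
% excerpt, the same certificate extends to gradient descent with a suitable
% fixed step (assumed here to be 2/(L+mu)).
\[
  f(x_{k+1}) - f(x_\star)
  \;\le\; \left(\frac{L-\mu}{L+\mu}\right)^{2} \bigl(f(x_k) - f(x_\star)\bigr),
  \qquad k = 0, 1, 2, \dots
\]
```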
“…The analysis below is based on the performance estimation methodology which was first introduced in [13] and has been successfully applied to analyze methods in a wide range of settings, including smooth and nonsmooth minimization [14,22,23,50], proximal gradient methods [48,51], saddle-point problems [11] and more recently to operator splitting methods [43]. Here we build upon an approach developed for the analysis of line-searching methods [6], and improve it by providing a tightness proof under some mild conditions. Clearly, a meaningful analysis can only be attained by making some assumptions on the structure of the problem: namely, that f belongs to some given class of functions F and that the initial point x_0 satisfies some conditions.…”
Section: Estimating the Worst-case Performance of GFOM
confidence: 99%
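The performance estimation methodology mentioned in this excerpt computes worst-case guarantees by optimizing over the admissible functions and starting points themselves. A schematic form of such a performance estimation problem is sketched below; the notation, the function-value accuracy criterion, and the initial condition ||x_0 - x_star|| <= R are illustrative choices, not taken from the cited works.

```latex
% Schematic performance estimation problem (PEP): worst-case accuracy of a
% fixed method after N iterations over a function class F, for starting
% points within distance R of a minimizer (illustrative formulation).
\[
  w(\mathcal{F}, N, R) \;=\;
  \max_{f \in \mathcal{F},\; x_0, \dots, x_N,\; x_\star} \; f(x_N) - f(x_\star)
  \quad \text{s.t.} \quad
  x_\star \in \operatorname*{argmin}_x f(x), \;\;
  \|x_0 - x_\star\| \le R, \;\;
  x_1, \dots, x_N \text{ generated by the method from } x_0 .
\]
```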
“…For the sake of clarity and completeness, we start by proving it using the same technique as for the subsequent results (on residual gradient norm and objective function accuracy). The proof relies on the performance estimation methodology (see [9,10,11,14,17]). This technique has the advantage of being transparent and of explicitly identifying weaker assumptions for obtaining this convergence property (see the discussion below Theorem 3.1).…”
Section: Upper Bounds on the Global Convergence Rates
confidence: 99%
“…There has been a resurgence of interest in first-order methods using a small-dimensional relaxation (sDR) oracle [17,7,14]. One of the reasons is that, in practice, small-dimensional relaxation allows local adaptation to the curvature of the objective function, which can dramatically improve the practical convergence rate, the classical example being conjugate gradient methods.…”
Section: Introduction
confidence: 99%
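The classical example named in this excerpt, the conjugate gradient method, illustrates how searching a small subspace (the current gradient plus the previous direction) adapts the step to local curvature. Below is a minimal sketch of linear CG for a quadratic 0.5*x^T A x - b^T x; the positive-definite test matrix and right-hand side are arbitrary data, not drawn from any of the cited papers.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=100):
    """Linear conjugate gradient for minimizing 0.5*x^T A x - b^T x,
    i.e. solving A x = b for symmetric positive definite A."""
    x = x0.copy()
    r = b - A @ x          # residual = negative gradient at x
    p = r.copy()           # first direction: steepest descent
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        # The new direction mixes the new gradient with the previous direction,
        # which is the small-dimensional relaxation that adapts to curvature.
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Arbitrary symmetric positive definite test problem (illustrative only).
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)
x = conjugate_gradient(A, b, np.zeros(5))
print("residual norm:", np.linalg.norm(A @ x - b))
```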