2006
DOI: 10.1007/s10589-006-6446-0

Gradient Methods with Adaptive Step-Sizes

Abstract: Motivated by the superlinear behavior of the Barzilai-Borwein (BB) method for two-dimensional quadratics, we propose two gradient methods which adaptively choose a small step-size or a large step-size at each iteration. The small step-size is primarily used to induce a favorable descent direction for the next iteration, while the large step-size is primarily used to produce a sufficient reduction. Although the new algorithms are still linearly convergent in the quadratic case, numerical experiments on some typ…
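For context, the Barzilai-Borwein step-sizes that motivate the paper can be sketched as follows. This is a minimal NumPy illustration for an unconstrained convex quadratic; the even/odd switch between the short (BB2) and long (BB1) step is a placeholder rule for demonstration only, not the adaptive criterion proposed by the authors.

```python
import numpy as np

def bb_gradient_quadratic(A, b, x0, iters=100, tol=1e-8):
    """Gradient iteration on f(x) = 0.5*x^T A x - b^T x using
    Barzilai-Borwein step-sizes.  The choice between the long (BB1)
    and short (BB2) step below is an illustrative placeholder,
    NOT the adaptive criterion of the paper."""
    x = x0.copy()
    g = A @ x - b                                  # gradient of the quadratic
    alpha = 1.0 / max(np.linalg.norm(g), 1e-12)    # safeguarded first step
    for k in range(iters):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 0:                                 # curvature condition holds
            bb1 = (s @ s) / sy                     # long BB step
            bb2 = sy / (y @ y)                     # short BB step
            # placeholder switch: short step on even iterations, long otherwise
            alpha = bb2 if k % 2 == 0 else bb1
        x, g = x_new, g_new
    return x
```

For example, calling the sketch with a diagonal positive-definite A and b = 0 recovers the two-dimensional quadratic setting discussed in the abstract.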

Cited by 184 publications (176 citation statements)
References 22 publications
“…In order to prove (19)-(20), let us reason by induction on j. Using the triangle inequality, (14) with k = 0, the monotonicity of {Ψ(x^(k))}_{k∈N}, and (13), we have…”
Section: Convergence Results
confidence: 99%
“…Finally, direct use of (16) shows that (20) holds with j = 1. By induction, suppose that (19)-(20) hold for some j ≥ 1.…”
Section: Convergence Results
confidence: 99%
“…The step size of GP is analytically calculated and adaptively changes during the optimization [30]. The high computational efficiency of this GP-BB method has been demonstrated in Refs. 25 and 30 by numerical experiments.…”
Section: II.A Formulation of ABOCS Framework
confidence: 90%
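As a rough illustration of the GP-BB idea mentioned in the statement above, the sketch below applies a BB step-size inside a gradient-projection loop for a box-constrained quadratic. The constraint set, the fallback step, and the stopping rule are assumptions made for this example, not details taken from the cited references.

```python
import numpy as np

def gp_bb_box(A, b, lo, hi, x0, iters=200, tol=1e-8):
    """Hypothetical gradient projection with a BB step-size on
    f(x) = 0.5*x^T A x - b^T x subject to box constraints
    lo <= x <= hi; safeguards and line searches used by the
    cited GP-BB methods are omitted."""
    proj = lambda z: np.clip(z, lo, hi)            # projection onto the box
    x = proj(x0)
    g = A @ x - b
    alpha = 1.0
    for _ in range(iters):
        x_new = proj(x - alpha * g)                # projected gradient step
        if np.linalg.norm(x_new - x) < tol:
            break
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else 1.0    # BB1 step with fallback
        x, g = x_new, g_new
    return x
```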
“…The BB algorithm has been much studied because of its remarkable improvement over the SD and OM algorithms ([25,4,17] and references therein) and proofs of its convergence can be found in [20]. However, a complete explanation of why this simple modification of the SD algorithm improves its performance considerably has not yet been found, although it has been suggested that the improvement is connected to its nonmonotonic convergence, as well as to the fact that it does not produce iterates that get trapped in a low-dimensional subspace [4].…”
Section: Preliminaries
confidence: 99%