2022
DOI: 10.48550/arxiv.2204.01705
Preprint

Learning to Accelerate by the Methods of Step-size Planning

Abstract: Gradient descent is slow to converge for ill-conditioned problems and non-convex problems. An important technique for acceleration is step-size adaptation. The first part of this paper contains a detailed review of step-size adaptation methods, including Polyak step-size, L4, LossGrad, Adam, IDBD, and Hypergradient descent, and the relation of step-size adaptation to meta-gradient methods. In the second part of this paper, we propose a new class of methods of accelerating gradient descent that have some distin…
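As context for the step-size adaptation methods the abstract lists, here is a minimal sketch (not the paper's proposed step-size planning method) of two of the reviewed rules, the Polyak step-size and hypergradient descent, applied to an illustrative ill-conditioned quadratic. The test problem, function names, and hyperparameter values are assumptions for demonstration only.

```python
import numpy as np

# Illustrative ill-conditioned quadratic f(x) = 0.5 * x^T A x (condition number 100).
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
f_star = 0.0  # known optimal value, required by the Polyak rule

def polyak_gd(x0, steps=100):
    """Gradient descent with the Polyak step-size
    eta_t = (f(x_t) - f*) / ||grad f(x_t)||^2."""
    x = x0.copy()
    for _ in range(steps):
        g = grad(x)
        eta = (f(x) - f_star) / (g @ g + 1e-12)
        x -= eta * g
    return x

def hypergradient_gd(x0, alpha=1e-3, beta=1e-4, steps=100):
    """Gradient descent where the scalar step size alpha is itself adapted by
    gradient descent on the loss (hypergradient descent):
    alpha_t = alpha_{t-1} + beta * <grad f(x_t), grad f(x_{t-1})>."""
    x = x0.copy()
    g_prev = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        alpha += beta * (g @ g_prev)  # hypergradient update of the step size
        x -= alpha * g
        g_prev = g
    return x

x0 = np.array([1.0, 1.0])
print("Polyak step-size:      f =", f(polyak_gd(x0)))
print("Hypergradient descent: f =", f(hypergradient_gd(x0)))
```

Both rules adapt the step size from quantities already available during optimization (the current loss and gradient, or the inner product of successive gradients), which is the common thread of the adaptation methods surveyed in the first part of the paper.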

Cited by 0 publications | References 49 publications