2021
DOI: 10.36227/techrxiv.14363477.v1
Preprint

Step size self-adaptation for SGD

Abstract: Convergence and generalization are two crucial aspects of performance in neural networks. When analyzed separately, these properties may lead to contradictory results. Optimizing the convergence rate yields fast training but does not guarantee the best generalization error. To avoid this conflict, recent studies suggest adopting a moderately large step size for optimizers, but its added value to performance remains unclear. We propose the LIGHT function with four configurations that regulate ex…
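The abstract is truncated, so the form of the proposed LIGHT function is not available here. As context for the trade-off it describes, below is a minimal sketch of plain SGD with a moderately large initial step size and a slow inverse-time decay; this is a generic illustration, not the paper's method, and every name and constant in it (quadratic_grad, sgd, eta0, decay) is an illustrative assumption.

    # Minimal sketch: SGD on a least-squares problem with a moderately large,
    # slowly decaying step size. Illustrative only; not the paper's LIGHT function.
    import numpy as np

    def quadratic_grad(w, X, y):
        # Gradient of the mean squared error 0.5 * ||X w - y||^2 / n.
        return X.T @ (X @ w - y) / len(y)

    def sgd(X, y, eta0=0.3, decay=0.01, epochs=50, batch=8, seed=0):
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        step = 0
        for _ in range(epochs):
            # Shuffle and split into mini-batches (assumes len(y) % batch == 0).
            for idx in rng.permutation(len(y)).reshape(-1, batch):
                # Moderately large initial step size eta0, the regime the
                # abstract refers to; the slow decay restores convergence
                # late in training.
                eta = eta0 / (1.0 + decay * step)
                w -= eta * quadratic_grad(w, X[idx], y[idx])
                step += 1
        return w

    # Toy usage: recover w_true from noisy linear data.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(80, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true + 0.1 * rng.normal(size=80)
    print(sgd(X, y))  # approximately w_true

The point of the sketch is the schedule only: a larger eta0 speeds early progress and, per the studies the abstract cites, may help generalization, while the decay term is what guarantees the late-training convergence.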

Cited by 0 publications
References 17 publications