2022
DOI: 10.48550/arxiv.2201.04545
Preprint

On generalization bounds for deep networks based on loss surface implicit regularization

Abstract: Classical statistical learning theory says that fitting too many parameters leads to overfitting and poor performance. The fact that modern deep neural networks generalize well despite their large number of parameters contradicts this finding and constitutes a major unsolved problem in explaining the success of deep learning. The implicit regularization induced by stochastic gradient descent (SGD) has been regarded as important, but its specific principle is still unknown. In this work, we study ho…
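The abstract's appeal to SGD's implicit regularization can be made concrete with a toy sketch. The following Python snippet is an illustration only, not the paper's method: the problem setup, the helper name grad_batch, and all hyperparameters are assumptions. It runs plain minibatch SGD on an overparameterized least-squares problem, where the update theta <- theta - eta * grad selects one interpolating solution among many.

```python
import numpy as np

# Toy illustration (not the paper's method): minibatch SGD on an
# overparameterized least-squares problem y = X @ theta_true. With
# more parameters than samples the loss has many global minima; the
# dynamics of (noisy) gradient descent bias the iterate toward a
# particular minimum, which is the "implicit regularization" the
# abstract refers to.

rng = np.random.default_rng(0)
n, d = 50, 200                     # n samples, d parameters (d > n)
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = X @ theta_true

def grad_batch(theta, idx):
    """Gradient of 0.5 * mean squared error over a minibatch."""
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ theta - yb) / len(idx)

theta = np.zeros(d)                # zero initialization
lr, batch_size = 0.01, 10          # assumed hyperparameters
for step in range(5000):
    idx = rng.choice(n, size=batch_size, replace=False)
    theta -= lr * grad_batch(theta, idx)   # SGD update

train_loss = 0.5 * np.mean((X @ theta - y) ** 2)
print(f"train loss: {train_loss:.3e}, ||theta||: {np.linalg.norm(theta):.2f}")
```

Starting from theta = 0, every iterate stays in the row span of X, so SGD converges toward the minimum-norm interpolating solution: a simple, well-known instance of implicit regularization in the linear case.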


Cited by 0 publications.
References 30 publications (43 reference statements).