2020
DOI: 10.48550/arxiv.2012.01978
Preprint

Asymptotic convergence rate of Dropout on shallow linear neural networks

Abstract: We analyze the convergence rate of gradient flows on objective functions induced by Dropout and Dropconnect when applied to shallow linear neural networks (NNs), which can also be viewed as performing matrix factorization with a particular regularizer. Dropout algorithms such as these are regularization techniques that use {0, 1}-valued random variables to filter weights during training in order to avoid coadaptation of features. By leveraging a recent result on nonconvex optimization and conducting a c…
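
To make the filtering mechanism described in the abstract concrete, below is a minimal NumPy sketch of how Dropout (masking hidden units) and Dropconnect (masking individual weights) act on a shallow linear network y = B A x. The layer sizes, the keep probability p, and the 1/p rescaling are illustrative assumptions for this sketch, not details taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    d_in, d_hidden, d_out = 8, 4, 8   # hypothetical layer sizes
    p = 0.5                           # keep probability (assumption)

    A = rng.standard_normal((d_hidden, d_in))   # first-layer weights
    B = rng.standard_normal((d_out, d_hidden))  # second-layer weights
    x = rng.standard_normal(d_in)

    # Plain shallow linear network: y = B @ A @ x, i.e. a factorized linear map.
    y = B @ (A @ x)

    # Dropout: a {0, 1}-valued Bernoulli mask filters the hidden *units*;
    # dividing by p keeps the output unbiased in expectation.
    unit_mask = rng.binomial(1, p, size=d_hidden)
    y_dropout = B @ (unit_mask * (A @ x)) / p

    # Dropconnect: independent {0, 1}-valued masks filter individual *weights*;
    # with independent masks on both layers, dividing by p**2 restores the mean.
    weight_mask_A = rng.binomial(1, p, size=A.shape)
    weight_mask_B = rng.binomial(1, p, size=B.shape)
    y_dropconnect = (weight_mask_B * B) @ ((weight_mask_A * A) @ x) / p**2

Averaging y_dropout or y_dropconnect over many sampled masks recovers y in expectation, which is why training on the masked objective can be read as minimizing the plain factorization loss plus a regularizer, as the abstract notes.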

Cited by 0 publications
References 16 publications