ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp39728.2021.9413500

Inertial Proximal Deep Learning Alternating Minimization for Efficient Neural Network Training

Abstract: In recent years, Deep Learning Alternating Minimization (DLAM), which is alternating minimization applied to the penalty form of deep neural network training, has been developed as an alternative algorithm to overcome several drawbacks of Stochastic Gradient Descent (SGD) algorithms. This work develops an improved DLAM via the well-known inertial technique, namely iPDLAM, which predicts a point by linearization of the current and last iterates. To further speed up training, we apply a war…
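The inertial prediction the abstract describes is an extrapolation from the current and previous iterates before the update is taken. Below is a minimal heavy-ball-style sketch of that idea, not the paper's exact iPDLAM update; the step size, momentum weight, and loss are illustrative assumptions.

```python
import numpy as np

def inertial_step(w_curr, w_prev, grad_f, lr=0.1, beta=0.9):
    """One inertial update: predict a point by linearizing the current
    and last iterates, then take a gradient step at the predicted point.
    A generic sketch of the inertial technique, not iPDLAM itself."""
    w_hat = w_curr + beta * (w_curr - w_prev)  # extrapolated (predicted) point
    return w_hat - lr * grad_f(w_hat)          # descent step at the prediction

# Toy usage: minimize f(w) = ||w||^2 / 2, whose gradient is grad_f(w) = w.
w_prev, w_curr = np.ones(3), 0.9 * np.ones(3)
for _ in range(50):
    w_prev, w_curr = w_curr, inertial_step(w_curr, w_prev, lambda w: w)
print(w_curr)  # converges toward the minimizer at the origin
```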

Cited by 2 publications (1 citation statement) | References 18 publications | Citing years: 2021-2022
“…However, [10] do not connect their algorithm to implicit SGD. A more recent variant of this algorithm by Qiao et al. [35] shows some formal links to proximal operators, an optimization process related to implicit SGD [34] (see appendix B.1). However, they do not interpret the weight updates as performing the proximal update or implicit SGD as we do here with IL.…”
Section: Related Work
confidence: 99%
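The excerpt links the weight updates to proximal operators and implicit SGD. For intuition, here is a minimal sketch of the connection for the simple case f(w) = (lam/2)*||w||^2, where the proximal point step has a closed form; the function name and parameters are illustrative assumptions, not from the cited papers.

```python
import numpy as np

def prox_l2(w_k, lam=1.0, eta=0.1):
    """Proximal point step for f(w) = (lam/2)*||w||^2:
        w_new = argmin_w f(w) + ||w - w_k||^2 / (2*eta).
    The optimality condition lam*w_new + (w_new - w_k)/eta = 0 gives the
    closed form below. This is also one implicit SGD step, since it
    satisfies w_new = w_k - eta * grad_f(w_new)."""
    return w_k / (1.0 + eta * lam)

w = np.array([4.0, -2.0])
for _ in range(5):
    w = prox_l2(w)  # each step shrinks w toward the minimizer at 0
print(w)
```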