2019
DOI: 10.1109/tsp.2019.2899294

A New Framework to Train Autoencoders Through Non-Smooth Regularization

Cited by 12 publications (31 citation statements)
References 40 publications
“…Sparse feature learning [14]- [19], L+S regularization of network weights [20] (U-Net trained with L+S output loss)…”
Section: Neural-model-based / Mathematical-model-based (mentioning)
confidence: 99%
“…There is plenty of room to incorporate sparsity-inducing priors into training as knowledge injection. They have explicitly been utilized for sparsification of hidden unit outputs in autoencoder-based sparse feature learning [14]- [19] as well as for low-rank and/or sparse regularization of network weights [5], [20]. The nuclear norm has not been used as a loss function yet, even though it is backpropable via automatic differentiation of the singular value decomposition [21].…”
Section: None (But Crucial To Prevent Catastrophic Forgetting) (mentioning)
confidence: 99%
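As an illustration of the point above about backpropagating through the SVD, here is a minimal, self-contained PyTorch sketch (my own example, not taken from the cited works) in which the nuclear norm of a weight matrix, computed via torch.linalg.svdvals, is used as part of a training loss; the tensor shapes and the penalty weight are arbitrary assumptions.

```python
import torch

# Hypothetical low-rank-encouraging loss: the nuclear norm of a weight matrix,
# i.e. the sum of its singular values. torch.linalg.svdvals is differentiable,
# so autograd backpropagates through the SVD.
W = torch.randn(64, 32, requires_grad=True)
x = torch.randn(16, 64)
target = torch.randn(16, 32)

pred = x @ W                                     # simple linear map
data_loss = torch.nn.functional.mse_loss(pred, target)
nuclear_norm = torch.linalg.svdvals(W).sum()     # ||W||_* = sum of singular values
loss = data_loss + 1e-3 * nuclear_norm           # nuclear norm used inside the loss

loss.backward()                                  # gradients flow through the SVD
print(W.grad.shape)
```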
“…Different types of regularization have been proposed to enhance the efficiency of FFNNs. Introduction of sparsity regularizer has successfully improved the initialization quality of deep fully-connected neural networks [9], [10]. Low-coherence regularization has also shown impressive results in improving CNNs accuracy [11], which was previously explored for dictionary learning [12].…”
Section: Introduction (mentioning)
confidence: 99%
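For context on the low-coherence regularization mentioned in this statement, the following is a hypothetical sketch of one common formulation from the dictionary-learning literature: penalizing off-diagonal entries of the Gram matrix of column-normalized weights. It is an illustration only, not the exact regularizer of [11] or [12].

```python
import torch

def coherence_penalty(W: torch.Tensor) -> torch.Tensor:
    """Soft low-coherence regularizer: penalize off-diagonal entries of the
    Gram matrix of the column-normalized weight matrix."""
    Wn = W / (W.norm(dim=0, keepdim=True) + 1e-8)    # normalize columns
    gram = Wn.t() @ Wn                               # pairwise column correlations
    off_diag = gram - torch.diag(torch.diagonal(gram))
    return (off_diag ** 2).sum()                     # squared off-diagonal energy

W = torch.randn(128, 64, requires_grad=True)
reg = coherence_penalty(W)
reg.backward()                                       # usable as an additive loss term
```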
“…Smooth mathematical regularizers are preferable in the sense that they meet smoothness constraint assumed by many optimization algorithms used to learn network parameters [6]. Recently, a new learning framework has been proposed which aims at eliminating the smoothness constraint for the regularizers [9]. In this framework, instead of smoothness, the regularizer must be proximable which makes many nonsmooth regularizers applicable to the problem of training FFNNs.…”
Section: Introduction (mentioning)
confidence: 99%
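The framework described in this statement replaces the smoothness requirement on the regularizer with proximability. A generic way to exploit that is proximal-gradient training: a gradient step on the smooth data-fit loss followed by the regularizer's proximal operator applied to the weights. The sketch below uses an L1 penalty, whose proximal operator is soft-thresholding; it is a schematic example under assumed hyperparameters, not the specific algorithm of [9].

```python
import torch

def soft_threshold(w: torch.Tensor, t: float) -> torch.Tensor:
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return torch.sign(w) * torch.clamp(w.abs() - t, min=0.0)

model = torch.nn.Linear(20, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 1e-3                                   # regularization strength (assumed value)

x, y = torch.randn(32, 20), torch.randn(32, 1)
for _ in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)   # smooth data-fit term only
    loss.backward()
    opt.step()                                          # gradient step on the smooth part
    with torch.no_grad():                               # proximal step handles the
        for p in model.parameters():                    # non-smooth L1 regularizer
            p.copy_(soft_threshold(p, 0.1 * lam))       # prox of (lr * lam) * ||.||_1
```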