2019
DOI: 10.1109/tnnls.2018.2885972

Regularizing Deep Neural Networks by Enhancing Diversity in Feature Extraction

Cited by 65 publications (30 citation statements) | References 20 publications
“…Recently, deep learning algorithms have driven a series of revolutions in the field of machine learning (Huang et al. 2019), since the ability of a neural network to fit a classification decision boundary has become significantly more reliable (LeCun et al. 2015) and such networks can successfully learn and extract patterns and distinctive features from big data (Ayinde et al. 2019). Deep learning can also effectively avoid poor local optima and eliminates the need to set model parameters by hand because of its autonomous learning process (Zhang et al. 2017).…”
Section: Introduction
confidence: 99%
“…We should note that dictionary learning in sparse coding (1) and the AE (4) differ in two respects. First, the reconstruction error (4) involves mapping the data onto itself by two matrices W_1 and W_2, while the same error, appearing as the first term of (1), involves a single matrix Φ. Second, (1) is solved by optimization, while (4) is based on unsupervised learning of h.…”
Section: Dictionary Learning via Constrained Autoencoders
confidence: 99%
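The contrast is easier to see with the two objectives written out. A minimal sketch in standard notation, assuming the usual ℓ1-regularized dictionary-learning and one-hidden-layer autoencoder forms; the equation numbers (1) and (4) belong to the citing paper, and λ and σ here are assumed, not quoted:

    \min_{\Phi, H} \|X - \Phi H\|_F^2 + \lambda \|H\|_1
        (sparse coding, cf. (1): a single matrix \Phi; the codes H are found by per-sample optimization)

    \min_{W_1, W_2} \|X - W_2\, \sigma(W_1 X)\|_F^2, \qquad h = \sigma(W_1 X)
        (autoencoder, cf. (4): two matrices map the data onto itself; h is produced by the learned encoder)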
“…Fig. 41 shows the distribution of pairwise correlations of first-hidden-layer features (figure residue: test error (%) on MNIST; source: [4]). As mentioned earlier, a crucial step in achieving good performance with divReg-2 is not only the choice of τ* but also the initialization of the adaptive dropout fraction α. Figs.…”
Section: Feature Evolution During Training
confidence: 99%
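Since divReg-2 works by penalizing redundant (highly correlated) feature detectors, a short sketch of how such pairwise correlations and a thresholded diversity penalty could be computed may help. The function names and the quadratic penalty form are illustrative assumptions, not the authors' released code, and tau stands in for the τ* of the quoted passage:

    import numpy as np

    def pairwise_correlations(W):
        # W: (n_hidden, n_input) weight matrix; each row is one feature detector.
        # Returns the (n_hidden, n_hidden) matrix of cosine similarities between detectors.
        Wn = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
        return Wn @ Wn.T

    def diversity_penalty(W, tau=0.5):
        # Penalize only pairs whose similarity magnitude exceeds the threshold tau,
        # so near-duplicate detectors contribute while dissimilar ones are left alone.
        C = pairwise_correlations(W)
        iu = np.triu_indices_from(C, k=1)   # each unordered pair counted once
        sims = C[iu]
        return np.sum(sims[np.abs(sims) > tau] ** 2)

    # Usage: randomly initialized detectors are weakly correlated, so the penalty is small;
    # during training, the penalty grows as detectors drift toward one another.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 784))
    print(diversity_penalty(W, tau=0.5))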