2018
DOI: 10.1007/978-3-030-01418-6_48

Catastrophic Forgetting: Still a Problem for DNNs

Abstract: We investigate the performance of DNNs when trained on class-incremental visual problems consisting of initial training, followed by retraining with added visual classes. Catastrophic forgetting (CF) behavior is measured using a new evaluation procedure that aims at an application-oriented view of incremental learning. In particular, it requires that model selection be performed on the initial dataset alone, and that retraining be controlled using only the retraining dataset, as ini…
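
To make the evaluated setting concrete, here is a minimal, self-contained sketch of class-incremental training followed by a forgetting measurement: a model is trained on an initial task D1, retrained on a second task D2 containing only new classes, and then re-evaluated on D1. The synthetic data, architecture, and hyperparameters are illustrative stand-ins, not the paper's actual datasets or model-selection protocol.

```python
# Minimal class-incremental sketch: train on D1, retrain on D2 (new classes
# only), then re-measure D1 accuracy. Everything here is an illustrative
# stand-in for the visual problems evaluated in the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(classes, n=500, dim=20):
    """Synthetic stand-in for a visual task: one Gaussian blob per class."""
    xs, ys = [], []
    for c in classes:
        xs.append(torch.randn(n, dim) + 3.0 * torch.randn(1, dim))
        ys.append(torch.full((n,), c, dtype=torch.long))
    return torch.cat(xs), torch.cat(ys)

def train(model, x, y, epochs=50, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

n_classes = 4                      # classes 0-1 are "initial", 2-3 are "added"
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, n_classes))

x1, y1 = make_task([0, 1])         # initial task D1
x2, y2 = make_task([2, 3])         # retraining task D2 (new classes only)

train(model, x1, y1)
acc_before = accuracy(model, x1, y1)

train(model, x2, y2)               # retrain without revisiting D1
acc_after = accuracy(model, x1, y1)

print(f"D1 accuracy before retraining: {acc_before:.2f}")
print(f"D1 accuracy after retraining:  {acc_after:.2f}")  # typically collapses
```

Re-running this with different seeds shows the same qualitative effect: accuracy on the initial classes collapses once retraining sees only the new ones, which is the CF behavior the paper's evaluation procedure is designed to measure.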


Cited by 16 publications (7 citation statements, published between 2018 and 2022); references 7 publications.
“…Although DL approaches have shown good results, they continue to suffer from catastrophic forgetting (Pfülb et al., 2018), a reduction in overall performance when training is done on the new classes. This problem arises from the requirement of having the complete dataset, including the samples from both old and new classes.…”
Section: Need for Incremental Learning
confidence: 99%
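
The statement above ties forgetting to the need for the complete old dataset during retraining. A common partial workaround, rehearsal, keeps only a small exemplar buffer of old samples and mixes it into retraining; this is a generic technique, not something proposed in the cited paper, and the buffer size below is made up.

```python
# Rehearsal sketch: instead of storing the complete old dataset, retain a
# small exemplar buffer and mix it into every retraining batch.
import torch

def sample_buffer(x_old, y_old, k=50):
    # Keep only k random exemplars of the old classes instead of the full set.
    idx = torch.randperm(x_old.size(0))[:k]
    return x_old[idx], y_old[idx]

def rehearsal_batch(x_new, y_new, buf_x, buf_y):
    # Retraining on new-class data mixed with stored old-class exemplars keeps
    # gradients from pointing entirely away from the old classes.
    return torch.cat([x_new, buf_x]), torch.cat([y_new, buf_y])
```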
“…Superficially, ResNet101, which shows superiority in both parameter scale and number of layers, achieves classification performance (mAP of 80.42%) close to that of DenseNet169 (mAP of 80.99%). However, when the scene images are assigned hierarchical multi-labels, the issue of data imbalance becomes prominent, which brings the problem of "catastrophic forgetting" [196], [197]. As a result, the networks show weak performance on categories like stadium and apartment. When τ is increased to 0.75, the OPs of all models gain significant improvement.…”
Section: Results
confidence: 99%
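
For context on the metrics in this statement: mAP averages per-class average precision, and OP (overall precision) is commonly defined, at score threshold τ, as the fraction of predicted labels that are correct. A small sketch under those common definitions, with made-up labels and scores; this is the standard formulation, not code from the cited survey.

```python
# mAP and OP@tau for multi-label predictions, assuming y_true is a binary
# label matrix and y_score holds per-class confidence scores (values made up).
import numpy as np
from sklearn.metrics import average_precision_score

y_true = np.array([[1, 0, 1],
                   [0, 1, 1],
                   [1, 1, 0]])
y_score = np.array([[0.9, 0.2, 0.8],
                    [0.1, 0.7, 0.6],
                    [0.8, 0.4, 0.3]])

# mAP: mean over classes of per-class average precision.
mAP = np.mean([average_precision_score(y_true[:, c], y_score[:, c])
               for c in range(y_true.shape[1])])

# OP at threshold tau: of all labels predicted positive, how many are correct.
tau = 0.75
pred = y_score >= tau
OP = (pred & (y_true == 1)).sum() / max(pred.sum(), 1)

print(f"mAP = {mAP:.4f}, OP@{tau} = {OP:.4f}")
```

Raising τ makes the predictor more conservative, so precision-style metrics such as OP tend to improve, matching the behavior the quoted passage reports.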
“…Loss spikes arise when artificial neural networks (ANNs) encounter difficult examples and can destabilize training with gradient descent [1,2]. Examples may be difficult because an ANN needs more training to generalize, because it has catastrophically forgotten previous learning [3], or because an example is complex or unusual. Whatever the cause, applying gradients backpropagated [4] from high losses results in large perturbations to trainable parameters.…”
Section: Introduction
confidence: 99%
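
As an aside on the mitigation this observation motivates: a standard way to bound the parameter perturbation caused by a loss spike is to clip the global gradient norm before each optimizer step. A minimal PyTorch sketch with a placeholder model and random data follows; gradient clipping is a generic remedy, not necessarily the method of the citing paper.

```python
# Gradient-norm clipping: bound the size of each update even when a difficult
# batch produces a loss spike. Model and data here are placeholders.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for step in range(100):
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Rescale gradients so their global L2 norm never exceeds max_norm,
    # capping the perturbation a high-loss batch can apply to the parameters.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    opt.step()
```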