2018
DOI: 10.1007/978-3-030-01258-8_15

End-to-End Incremental Learning

Abstract: Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally. This is because current neural network architectures require the entire dataset, consisting of all the samples from the old as well as the new classes, to update the model, a requirement that quickly becomes unsustainable as the number of classes grows. We addres…

Cited by 635 publications (474 citation statements)
References 27 publications
“…The goal is to build an efficient algorithm that takes as input (a) a model M_t that has been trained on the first t units of data {D_1, …, D_t} and (b) new data D_{t+1}, and then outputs an updated model M_{t+1}. Efficiency is achieved without direct access to the previous training data {D_1, …, D_t}, while performance is maintained without catastrophically forgetting the previous model [16]. Ensemble learning algorithms can be used to achieve incremental learning.…”
Section: Related Work on Incremental Learning
Mentioning confidence: 99%
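The update interface described in the excerpt above can be made concrete with a small sketch. The Python snippet below is illustrative only and is not taken from the cited paper: the class name IncrementalModel, its update method, and the nearest-class-mean classifier are assumptions used to show a model M_t being turned into M_{t+1} from a new batch D_{t+1} without revisiting D_1, …, D_t.

```python
# Minimal sketch of the incremental-update interface described above:
# update(M_t, D_{t+1}) -> M_{t+1}, without access to D_1..D_t.
# All names here are illustrative, not from the paper.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

import numpy as np


@dataclass
class IncrementalModel:
    """Toy nearest-class-mean classifier that can absorb new data batches."""
    class_means: Dict[int, np.ndarray] = field(default_factory=dict)   # label -> running mean
    class_counts: Dict[int, int] = field(default_factory=dict)         # label -> samples seen

    def update(self, new_data: List[Tuple[np.ndarray, int]]) -> "IncrementalModel":
        """Return M_{t+1}: fold the new batch into running statistics only.

        Old batches D_1..D_t are never touched; only their summary
        (per-class means and counts) is carried forward.
        """
        for x, y in new_data:
            n = self.class_counts.get(y, 0)
            mean = self.class_means.get(y, np.zeros_like(x, dtype=float))
            self.class_means[y] = (mean * n + x) / (n + 1)
            self.class_counts[y] = n + 1
        return self

    def predict(self, x: np.ndarray) -> int:
        """Assign x to the class with the nearest stored mean."""
        return min(self.class_means, key=lambda c: np.linalg.norm(x - self.class_means[c]))
```

A nearest-class-mean model is used only to keep the example self-contained; the quoted passage makes no assumption about the underlying model family.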
“…Then, for each new batch of data, an SVM is trained on the new data together with the support vectors from the previous learning step [18]. Deep learning approaches have been producing state-of-the-art results, yet they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new data added incrementally [16]. [15] utilizes a Bayesian approach to update the deep learning model with new data.…”
Section: Related Work on Incremental Learning
Mentioning confidence: 99%
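As a rough illustration of the support-vector replay scheme cited as [18], the sketch below refits a scikit-learn SVC on each new batch together with the support vectors kept from the previous step. The helper name incremental_svm_step and the stashed attribute _y_of_support are assumptions made for this example, not part of any published API.

```python
# Hedged sketch: at each step, train an SVM on the new batch plus the
# support vectors retained from the previous step.
import numpy as np
from sklearn.svm import SVC


def incremental_svm_step(model, X_new, y_new):
    """Refit an SVM on the new batch plus the previous step's support vectors."""
    if model is not None:
        # Carry forward only the support vectors (and their labels) from step t.
        X_train = np.vstack([model.support_vectors_, X_new])
        y_train = np.concatenate([model._y_of_support, y_new])
    else:
        X_train, y_train = np.asarray(X_new), np.asarray(y_new)

    new_model = SVC(kernel="linear")
    new_model.fit(X_train, y_train)
    # Remember the labels of the current support vectors for the next step.
    new_model._y_of_support = y_train[new_model.support_]
    return new_model


# Usage over a stream of batches (model starts as None):
#   model = None
#   for X_batch, y_batch in batches:
#       model = incremental_svm_step(model, X_batch, y_batch)
```

Retaining only the support vectors keeps the memory footprint bounded by the decision boundary rather than the full history, which is the point of the scheme described in the quoted passage.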
“…However, multitask models could help improve generalizability when introduced as an intermediary. Lifelong learning has been used to help models learn from separate but related tasks in stages or continuously, and could be used to facilitate multitask training. Lifelong learning protocols have been used to transfer knowledge acquired on old tasks to new ones to improve generalization and facilitate model convergence.…”
Section: Introduction
Mentioning confidence: 99%