2017
DOI: 10.48550/arxiv.1708.06977
Preprint

Incremental Learning of Object Detectors without Catastrophic Forgetting

Cited by 4 publications (6 citation statements) | References 0 publications
“…Different from standard multi-task learning methods, which must jointly learn from the previously observed task data in the offline regime, lifelong learning methods explore how to establish relationships between the observed tasks and newly arriving ones while avoiding performance loss on the previously encountered tasks. Generally speaking, by compressing previous knowledge into a compact knowledge library [2], [17], [38] or storing the knowledge in the learned network weights [22], [36], [51], the major procedure in most existing state-of-the-art methods is either to transfer knowledge from the current knowledge library [2], [38] to learn a newly arriving task and consolidate the fresh knowledge over time, or to simply re-train the deep lifelong learning network while overcoming catastrophic forgetting [22], [34], [39], [51]. Despite the success of lifelong machine learning, the basic assumption of most existing models is that all learned tasks are drawn i.i.d.…”
Section: Introduction (mentioning)
confidence: 99%
“…The first loss term L_focal and the second loss term L_regr are the standard loss functions used in [24] to train RetinaNet for the new classes, where Y_n represents the ground-truth one-hot classification labels, Ŷ_n represents the new model's classification output over the n new classes, B_n represents the ground-truth bounding box coordinates, and B̂_n represents the predicted bounding box coordinates for the ground-truth objects. The third loss term L_dist_clas is the distillation loss for the classification subnet, similar to that defined in [22, 34]. Here, Y_o is the output of the frozen old model F′ for the m old classes using the new training data, and Ŷ_o is the output of the new model F for the old classes.…”
Section: Incremental Learning Methods (mentioning)
confidence: 99%
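To make the quoted objective concrete, the following is a minimal PyTorch-style sketch of a combined loss of this form: focal and box-regression terms on the new classes, plus a distillation term that ties the new model's old-class logits to the frozen old model's outputs on the same images. The function name, the lambda_dist weight, and the MSE form of the distillation term are illustrative assumptions, not the cited paper's exact implementation.

```python
import torch.nn.functional as F
from torchvision.ops import sigmoid_focal_loss  # available in torchvision >= 0.8


def incremental_detection_loss(new_logits, pred_boxes, gt_labels_onehot, gt_boxes,
                               old_logits_frozen, n_new, lambda_dist=1.0):
    """Sketch of L_focal + L_regr + L_dist_clas for an incremental RetinaNet-style head.

    new_logits:        new model's classification output, columns = [old classes | new classes]
    old_logits_frozen: frozen old model's output Y_o for the m old classes on the new data
    gt_labels_onehot:  one-hot float labels Y_n for the n_new new classes
    """
    new_cls = new_logits[:, -n_new:]   # Ŷ_n: predictions for the new classes
    old_cls = new_logits[:, :-n_new]   # Ŷ_o: predictions for the old classes

    # Standard RetinaNet terms on the new classes (L_focal and L_regr).
    l_focal = sigmoid_focal_loss(new_cls, gt_labels_onehot, reduction="mean")
    l_regr = F.smooth_l1_loss(pred_boxes, gt_boxes)

    # Distillation on the classification subnet (L_dist_clas): keep the new model's
    # old-class outputs close to the frozen old model's outputs on the same images.
    l_dist = F.mse_loss(old_cls, old_logits_frozen.detach())

    return l_focal + l_regr + lambda_dist * l_dist
```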
“…One important approach is adding regularization [2,19] to the neural network weights according to their importance to the old classes, i.e., it discourages changes to important weights by using a smaller learning rate. The other major research direction is based on knowledge distillation [12], which uses the new data to distill the knowledge (i.e., the network output) from the old model and mimic its behavior (i.e., generate similar output) when training the new model [22,31,34]. Among the related work, only [34] has focused on the object detection problem.…”
Section: Related Work (mentioning)
confidence: 99%
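The regularization route mentioned first in the quote can be sketched as an importance-weighted penalty that pulls weights back toward their old values, in the spirit of [2,19]; the names old_params and importance are hypothetical placeholders for the stored old weights and their estimated per-weight importance.

```python
import torch


def importance_penalty(model, old_params, importance, lam=1.0):
    """EWC-style regularizer: the more important a weight was for the old
    classes, the more strongly it is pulled back toward its old value,
    which effectively slows its updates (the 'smaller learning rate' intuition)."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return lam * penalty


# Usage sketch: total_loss = task_loss + importance_penalty(model, old_params, importance)
```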
“…The inner latent variable inference component of LILAC possesses strong similarities to the continual and lifelong learning setting [18]. Many continual and lifelong learning methods aim to learn a variety of tasks without forgetting previous tasks [30,58,35,3,37,45,49,40,48]. We consider a setting where it is practical to store past experiences in a replay buffer [41,16].…”
Section: Related Work (mentioning)
confidence: 99%
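For the replay buffer mentioned in the last sentence of the quote, a minimal sketch is a fixed-capacity store of past experiences sampled uniformly during training; the class name and its API are illustrative, not the interface used in [41] or [16].

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-capacity store of past experiences; the oldest items are dropped first."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, experience):
        self.buffer.append(experience)

    def sample(self, batch_size):
        # Uniform sampling without replacement from whatever is currently stored.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))
```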