2021
DOI: 10.48550/arxiv.2106.15287
Preprint

Tackling Catastrophic Forgetting and Background Shift in Continual Semantic Segmentation

Arthur Douillard,
Yifu Chen,
Arnaud Dapogny
et al.

Abstract: Deep learning approaches are nowadays ubiquitously used to tackle computer vision tasks such as semantic segmentation, requiring large datasets and substantial computational power. Continual learning for semantic segmentation (CSS) is an emerging trend that consists in updating an old model by sequentially adding new classes. However, continual learning methods are usually prone to catastrophic forgetting. This issue is further aggravated in CSS where, at each step, old classes from previous iterations are collapsed into the background. …

Cited by 1 publication (3 citation statements) · References: 76 publications
“…The similarity between the response of the new model and that of the old model can be measured by a cross-entropy loss when the response is the model output, as in the LwF method [2] and the iCaRL method [1], or more generally by a Euclidean or cosine distance when the response is the output of one or more intermediate layers, as in the LwM method [4] and the PODNet method [17]. Such knowledge distillation has been effectively applied to continual semantic segmentation [5], [6], [7], [10]. Besides knowledge distillation, a replay strategy has also proven very helpful for both continual classification [1], [18] and semantic segmentation [12]…”
Section: Related Work (mentioning)
confidence: 99%
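The two distillation flavours quoted above can be summarized compactly. Below is a minimal PyTorch-style sketch (not code from any of the cited papers): matching the old model's output distribution with a soft cross-entropy term, as in LwF/iCaRL, and matching intermediate activations with a Euclidean distance, as in LwM/PODNet. The function names and the temperature T are illustrative assumptions.

```python
import torch.nn.functional as F


def output_distillation_loss(new_logits, old_logits, T=2.0):
    """LwF/iCaRL-style term: cross-entropy between the new model's output and the
    old model's softened output, restricted to the old classes.

    new_logits, old_logits: tensors of shape (B, C_old, H, W).
    """
    old_probs = F.softmax(old_logits / T, dim=1)          # soft targets from the frozen old model
    new_log_probs = F.log_softmax(new_logits / T, dim=1)  # new model's log-probabilities
    return -(old_probs * new_log_probs).sum(dim=1).mean()


def feature_distillation_loss(new_feats, old_feats):
    """LwM/PODNet-style term: Euclidean (MSE) distance between intermediate
    activations of the new and old models, averaged over the chosen layers.

    new_feats, old_feats: lists of tensors with matching shapes.
    """
    return sum(F.mse_loss(n, o) for n, o in zip(new_feats, old_feats)) / len(new_feats)
```

In the setting the citing papers describe, such a term would be combined with the ordinary cross-entropy on new classes (e.g. total loss = CE_new + λ · distillation, with λ a tunable weight); a replay strategy would additionally mix stored old-class samples into each training batch.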
“…In addition to alleviating the catastrophic forgetting issue, continual semantic segmentation also has to solve the background shift issue [6], [7], [11], [12], i.e., background regions in images at one learning stage may contain regions of classes learned at another stage. Using the old model at each new learning stage to pseudo-label part of the background regions of the new training images as previously learned classes has been shown to help alleviate this issue [7], [10], [12]. Aggregating, during knowledge distillation, the new model's predicted probabilities for the background class and the new classes into a single background probability can reduce the bias in background prediction between the old and the new model [6]…”
Section: Related Work (mentioning)
confidence: 99%
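To make the two mechanisms in this statement concrete, here is a hedged PyTorch-style sketch (again, not the authors' code) assuming a background index of 0, old-model classes occupying indices 0..num_old_classes-1, and new classes appended after them. It shows pseudo-labelling of confident background pixels with the old model's prediction, and MiB-style aggregation of background and new-class probabilities on the new model's side before distillation. The function names and the 0.9 confidence threshold are illustrative.

```python
import torch
import torch.nn.functional as F

BACKGROUND = 0  # assumed class index of the background


def pseudo_label_background(labels, old_logits, threshold=0.9):
    """Relabel background pixels of the new-step ground truth with the old model's
    prediction wherever that prediction is confident and non-background.

    labels: (B, H, W) integer ground truth where old-class regions appear as BACKGROUND.
    old_logits: (B, C_old, H, W) scores from the frozen old model.
    """
    old_probs = F.softmax(old_logits, dim=1)
    conf, old_pred = old_probs.max(dim=1)  # per-pixel confidence and predicted class
    keep = (labels == BACKGROUND) & (conf > threshold) & (old_pred != BACKGROUND)
    return torch.where(keep, old_pred, labels)


def background_aggregated_distillation(new_logits, old_logits, num_old_classes):
    """MiB-style unbiased distillation: the new model's probability mass for the new
    classes is folded into its background probability before comparing with the old
    model's distribution, which only knows background + old classes.

    new_logits: (B, C_new, H, W); old_logits: (B, C_old, H, W) with C_old == num_old_classes.
    """
    new_probs = F.softmax(new_logits, dim=1)
    bg_and_new = new_probs[:, :1] + new_probs[:, num_old_classes:].sum(dim=1, keepdim=True)
    merged = torch.cat([bg_and_new, new_probs[:, 1:num_old_classes]], dim=1)  # (B, C_old, H, W)
    old_probs = F.softmax(old_logits, dim=1)
    return -(old_probs * merged.clamp_min(1e-7).log()).sum(dim=1).mean()
```

The design choice both sketches share is that the old model, kept frozen, supplies the supervision for pixels whose labels have shifted to background, so the new model is never directly penalized for predicting old classes there.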