2021
DOI: 10.48550/arxiv.2104.00875
Preprint
Half-Real Half-Fake Distillation for Class-Incremental Semantic Segmentation

Abstract: Despite their success in semantic segmentation, convolutional neural networks are ill-equipped for incremental learning, i.e., adapting the original segmentation model as new classes become available while the initial training data is not retained. In fact, they are vulnerable to the catastrophic forgetting problem. We try to address this issue by "inverting" the trained segmentation network to synthesize input images starting from random noise. To avoid setting detailed pixel-wise segmentation maps as the supervision…
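The "inversion" idea in the abstract, optimizing a noise image until the frozen old model produces the desired segmentation output, can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: `frozen_net`, `target_class`, the image priors, and all hyperparameters are assumptions chosen for illustration.

```python
import torch
import torch.nn.functional as F

def invert_segmentation_net(frozen_net, target_class, image_size=(3, 256, 256),
                            steps=200, lr=0.05, tv_weight=1e-4, l2_weight=1e-5):
    """Synthesize one pseudo-image for `target_class` starting from random noise."""
    frozen_net.eval()
    x = torch.randn(1, *image_size, requires_grad=True)  # noise image to optimize
    opt = torch.optim.Adam([x], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        logits = frozen_net(x)  # expected shape: (1, num_classes, H, W)
        # Push the frozen model toward predicting `target_class` at every pixel.
        target = torch.full(logits.shape[-2:], target_class, dtype=torch.long).unsqueeze(0)
        ce = F.cross_entropy(logits, target)
        # Total-variation and L2 priors keep the synthesized image smooth and bounded.
        tv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
             (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
        loss = ce + tv_weight * tv + l2_weight * x.pow(2).mean()
        loss.backward()
        opt.step()

    return x.detach()
```

The synthesized images could then stand in for the unavailable old-class training data when distilling the old model's responses into the updated one; the exact supervision used by the paper is not reproduced here.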

Cited by 2 publications (3 citation statements)
References: 48 publications
“…Under the Overlapped setting, we can see that the performance of our method consistently outperforms that of other methods by a sizable margin on all evaluated VOC benchmarks (i.e., 19-1, 15-5 and 15-1). On VOC-19-1, the forgetting of old classes (1–19) is reduced by 1.91% while performance on new classes is greatly improved by 18.25%. On the most challenging benchmark VOC-15-1, it is worth noting that the performance of our method on all the seen classes outperforms its closest contender PLOP [12] by around 7.48%.…”
Section: Comparison To State-of-the-art Methods
confidence: 99%
“…The latter is more challenging because the task identity is unavailable at inference time. Recently, continual learning has been also explored on several other computer vision tasks, e.g., incremental object detection [23], incremental video classification [48], incremental instance segmentation [15], continual semantic segmentation [3,12,14,18,31,33,39,43,45,52]. Our work focuses on the CSS problem which can be considered as the classincremental learning scenario on semantic segmentation.…”
Section: Related Work
confidence: 99%
“…For instance, RECALL-GAN [28] utilizes a pre-trained generative model to retrieve old knowledge. Huang et al [152] propose to use a pretrained image-generative model to invert the trained segmentation network to synthesize input images from random noise. Besides, pre-trained models can be used as an auxiliary task to boost the CSS task.…”
Section: Regularization-based Manner
confidence: 99%