2020
DOI: 10.48550/arxiv.2006.07543
Preprint

GAN Memory with No Forgetting

Abstract: Seeking to address the fundamental issue of memory in lifelong learning, we propose a GAN memory that is capable of realistically remembering a stream of generative processes with no forgetting. Our GAN memory is based on recognizing that one can modulate the "style" of a GAN model to form perceptually-distant targeted generation. Accordingly, we propose to do sequential style modulations atop a well-behaved base GAN model, to form sequential targeted generative models, while simultaneously benefiting from the…
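The abstract only sketches the mechanism. As a rough, hypothetical illustration (not the authors' exact architecture), the snippet below shows one common form of style modulation: a shared convolution with frozen weights whose output channels are rescaled and shifted by small per-task parameters, so each sequential task is "remembered" by its own modulation vector rather than by overwriting shared weights. The class name, parameterization, and task indexing are assumptions made for this sketch.

```python
import torch
import torch.nn as nn

class StyleModulatedConv(nn.Module):
    """Hypothetical sketch of per-task style modulation atop a frozen base layer.

    The shared base convolution is never updated; each task only learns a small
    channel-wise scale (gamma) and shift (beta). Restoring a task's (gamma, beta)
    pair restores that task's generative behaviour, so earlier tasks are not
    overwritten."""

    def __init__(self, base_conv: nn.Conv2d, num_tasks: int):
        super().__init__()
        self.base = base_conv
        for p in self.base.parameters():      # freeze the shared base weights
            p.requires_grad = False
        c = base_conv.out_channels
        # one modulation vector per task; only the indexed row receives gradients
        self.gamma = nn.Parameter(torch.ones(num_tasks, c))
        self.beta = nn.Parameter(torch.zeros(num_tasks, c))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        h = self.base(x)                      # shared, frozen computation
        g = self.gamma[task_id].view(1, -1, 1, 1)
        b = self.beta[task_id].view(1, -1, 1, 1)
        return g * h + b                      # channel-wise "style" modulation


# Usage sketch: modulate a 3->64 channel conv for 5 sequential tasks.
layer = StyleModulatedConv(nn.Conv2d(3, 64, kernel_size=3, padding=1), num_tasks=5)
out = layer(torch.randn(2, 3, 32, 32), task_id=1)   # shape (2, 64, 32, 32)
```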

Cited by 8 publications (4 citation statements) · References 78 publications
“…Hence, they proposed an efficient generative-replay-based method by integrating the helper GAN model into the main model used for classification. Cong et al [123] proposed a GAN memory to mitigate catastrophic forgetting by learning an adaptive GAN model. They adapted modified variants of style-transfer techniques to transfer the base GAN model towards each task domain, leading to increased quality of the generated data instances.…”
Section: Generating Previous Data Instances
Mentioning, confidence: 99%
“…Transfer Learning. The main idea in transfer learning is to achieve a low generalization risk by adapting a pre-trained model (usually trained on a large, diverse dataset) to a target domain/task, typically using limited data from that same domain/task (Pan and Yang 2009; Zhao et al 2022; Cong et al 2020; Zhao, Cong, and Carin 2020; Mo, Cho, and Shin 2020). Generally, in discriminative learning, the pre-trained model is adapted in two simple ways (Yosinski et al 2014; Jiang et al 2022): i) linear probing (LP), which freezes the pre-trained network weights and trains only the newly added ones (Wu, Zhang, and Ré 2020; Malekzadeh et al 2017; Du et al 2020), and ii) fine-tuning (FT), which continues to train the entire set of pre-trained network weights (Cai et al 2019; Guo et al 2019; Abdollahzadeh, Malekzadeh, and Cheung 2021).…”
Section: Related Work
Mentioning, confidence: 99%
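To make the LP-versus-FT distinction in the quoted passage concrete, here is a minimal, hypothetical sketch. It uses torchvision's ResNet-18 purely as a stand-in for a pre-trained model (none of the quoted works necessarily use this backbone): linear probing freezes the backbone and trains only a newly added head, while fine-tuning replaces the head but keeps every weight trainable.

```python
import torch.nn as nn
from torchvision import models

def linear_probe(num_classes: int) -> nn.Module:
    """Linear probing (LP): freeze every pre-trained weight, train only the new head."""
    net = models.resnet18(weights="IMAGENET1K_V1")
    for p in net.parameters():
        p.requires_grad = False               # backbone stays fixed
    net.fc = nn.Linear(net.fc.in_features, num_classes)  # new head is trainable by default
    return net

def fine_tune(num_classes: int) -> nn.Module:
    """Fine-tuning (FT): replace the head but keep all pre-trained weights trainable."""
    net = models.resnet18(weights="IMAGENET1K_V1")
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net
```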
“…These methods have a similar objective to this work in that they promote models with parameter re-use across different domains. Finally, also related is the task of lifelong learning, wherein data is learned in an online fashion and may be subject to domain shift [12,13].…”
Section: Learning GANs on Multiple Domains
Mentioning, confidence: 99%