2021
DOI: 10.48550/arxiv.2104.10093
Preprint

Class-Incremental Learning with Generative Classifiers

Abstract: Incrementally training deep neural networks to recognize new classes is a challenging problem. Most existing class-incremental learning methods store data or use generative replay, both of which have drawbacks, while 'rehearsal-free' alternatives such as parameter regularization or bias-correction methods do not consistently achieve high performance. Here, we put forward a new strategy for class-incremental learning: generative classification. Rather than directly learning the conditional distribution p(y|x), …

Cited by 1 publication (2 citation statements) | References 31 publications
“…However, this algorithm employs SI to introduce the concept of parameter importance and thus tunes the shared weights Θ across the entire dataset. Finally, in generative classification [18], a separate variational autoencoder (VAE) is trained to learn the data distribution of each class. Importance sampling is then used to estimate the likelihood of a test sample x under the VAE model of class y, denoted p(x|y), while the prior p(y) is either assumed to follow a normal distribution or calculated by counting sample observations; Bayes' rule is then used for inference.…”
Section: Related Work
confidence: 99%
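The procedure quoted above is compact enough to sketch in code. Below is a minimal, hypothetical PyTorch sketch, not the paper's implementation: the architecture sizes, the Bernoulli decoder, and all function names (`VAE`, `log_likelihood`, `classify`) are assumptions made for illustration. It shows the three ingredients the citation statement describes: one VAE per class, an importance-sampling estimate of log p(x|y), and Bayes' rule for the final prediction.

```python
# Hypothetical sketch of generative classification with one VAE per class:
# predict argmax_y [log p(x|y) + log p(y)], with p(x|y) estimated by
# importance sampling under that class's VAE.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Tiny Gaussian-encoder / Bernoulli-decoder VAE (assumed sizes)."""
    def __init__(self, x_dim=784, z_dim=20, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

def log_likelihood(vae, x, n_samples=100):
    """Importance-sampling estimate of log p(x) under one class's VAE:
    log p(x) ~= logsumexp_s [log p(x|z_s) + log p(z_s) - log q(z_s|x)] - log S,
    with z_s drawn from the encoder q(z|x). x has shape (1, x_dim)."""
    mu, logvar = vae.encode(x)
    std = (0.5 * logvar).exp()
    z = mu + std * torch.randn(n_samples, mu.size(-1))   # S samples from q(z|x)
    logits = vae.dec(z)                                  # (S, x_dim)
    # log p(x|z): Bernoulli likelihood of the (assumed [0,1]-valued) input
    log_px_z = -F.binary_cross_entropy_with_logits(
        logits, x.expand_as(logits), reduction='none').sum(-1)
    log_pz = torch.distributions.Normal(0., 1.).log_prob(z).sum(-1)
    log_qz = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)
    return (torch.logsumexp(log_px_z + log_pz - log_qz, dim=0)
            - torch.log(torch.tensor(float(n_samples))))

def classify(vaes, x, log_prior):
    """Bayes' rule: argmax_y [log p(x|y) + log p(y)] over per-class VAEs."""
    scores = torch.stack([log_likelihood(v, x) for v in vaes]) + log_prior
    return scores.argmax().item()
```

With a uniform prior (e.g. `log_prior = torch.zeros(n_classes) - torch.log(torch.tensor(float(n_classes)))`) this reduces to picking the class whose VAE assigns the test sample the highest estimated likelihood; since each VAE is trained only on its own class's data, no stored samples or replay of past classes are needed.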
“…Finally, we compare the performance of our algorithm to the state-of-the-art models [18] in the Class-IL scenario, on both the Split-MNIST (Figure 8C) and Split-CIFAR-10 (Figure 8D) datasets.…”
Section: Experiments Objective
confidence: 99%