2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01226
Mnemonics Training: Multi-Class Incremental Learning Without Forgetting

Cited by 259 publications (253 citation statements). References 13 publications.
“…Due to the limited data quantity and uneven distribution of the phenol concentrations, this challenge cannot be fully addressed without expanding the data samples. Two possible solutions can be applied to tackle this particular issue, one is to enrich the phenol concentrations of the collected barley samples, and the other is to further refine the developed machine learning models, such as the incremental learning [55,56], for more robust modelling and prediction even with new categories of data.…”
Section: Discussion (mentioning)
confidence: 99%
“…2. Our method uses a multi-headed network architecture that has one head per task, which is a common architecture in lifelong learning [Li and Hoiem, 2017; Chaudhry et al., 2018a]. We first introduce mnemonic code, which is the key to solving the LSF problem, and then we present loss functions for learning our model.…”
Section: Methods (mentioning)
confidence: 99%
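
The multi-head layout described in the excerpt above (a shared backbone with one output head per task) can be illustrated with a minimal sketch. This is an assumption-laden illustration, not code from the cited paper: the class name MultiHeadNet, the add_task helper, and all dimensions are invented for the example.

```python
# Minimal PyTorch sketch of a multi-headed lifelong-learning network:
# one shared backbone plus one classification head per task.
# Class name, helper, and dimensions are illustrative assumptions.
import torch.nn as nn

class MultiHeadNet(nn.Module):
    def __init__(self, input_dim=784, feature_dim=512):
        super().__init__()
        self.feature_dim = feature_dim
        # Shared feature extractor reused by every task.
        self.backbone = nn.Sequential(
            nn.Flatten(),
            nn.Linear(input_dim, feature_dim),
            nn.ReLU(),
        )
        # One output head per task, appended as new tasks arrive.
        self.heads = nn.ModuleList()

    def add_task(self, num_classes):
        # Register a fresh head when a new task starts; earlier heads
        # remain in place and keep serving their own tasks.
        self.heads.append(nn.Linear(self.feature_dim, num_classes))

    def forward(self, x, task_id):
        # Route the shared features through the head of the requested task.
        return self.heads[task_id](self.backbone(x))
```

In use, one would call add_task once per new task and pass the matching task_id at inference time, e.g. net.add_task(10) followed by logits = net(x, task_id=0).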
“…Regularization: This approach leverages the previous tasks' knowledge implicitly by introducing additional regularization terms. This approach can be grouped into data-driven-based [Li and Hoiem, 2017; Hou et al., 2018; Dhar et al., 2019] and weight-constraint-based [Kirkpatrick et al., 2017; Zenke et al., 2017; Aljundi et al., 2018; Chaudhry et al., 2018a; Lee et al., 2017; Yu et al., 2020]. The former utilizes knowledge distillation, while the latter introduces a prior on the model parameters.…”
Section: Memory-replay (mentioning)
confidence: 99%
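
The two regularization flavours named in the excerpt above can be sketched as rough loss terms: a data-driven distillation loss that keeps the new model's outputs close to the old model's, and a weight-constraint penalty that discourages important parameters from drifting (in the spirit of EWC). The snippet below is a hedged sketch under those assumptions; the function names and the importance dictionary are illustrative, not taken from the cited works.

```python
# Sketch of the two regularization styles mentioned above (assumed forms,
# not the cited papers' exact losses).
import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, T=2.0):
    # Data-driven term: soften both output distributions with temperature T
    # and match the new model to the old one (LwF-style distillation).
    log_p_new = F.log_softmax(new_logits / T, dim=1)
    p_old = F.softmax(old_logits / T, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)

def weight_constraint_loss(model, old_params, importance, lam=1.0):
    # Weight-constraint term: penalize each parameter's drift from its old
    # value, scaled by an importance estimate (e.g. Fisher information).
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (importance[name] * (p - old_params[name]) ** 2).sum()
    return lam * loss
```

Either term would be added to the standard task loss during training on a new task; old_params and importance are snapshots taken after finishing the previous task.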