2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01503
Rectification-based Knowledge Retention for Continual Learning

Citations: cited by 24 publications (10 citation statements)
References: 18 publications
“…Singh et al [151] separated the shared and the task-specific components by calibrating the activation maps of each layer with spatial and channel-wise calibration modules which can adapt the model to different tasks. Singh et al [152] further extended [151] to be used for both zero-shot and non-zero-shot continual learning. Verma et al [153] proposed an Efficient Feature Transformation (EFT) to separate the shared and the task-specific features using Efficient Convolution Operations [154].…”
Section: Dynamic Network Architectures
confidence: 99%
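A minimal sketch of the per-task calibration described above, assuming a per-channel scaling vector and a lightweight spatial gating convolution; both forms are illustrative assumptions, not the exact modules of [151].

```python
# Hedged sketch: task-specific channel-wise and spatial calibration of a
# layer's activation maps; the shared backbone stays fixed across tasks.
import torch
import torch.nn as nn

class CalibrationModule(nn.Module):
    """Calibrates an activation map of shape (B, C, H, W) for one task."""
    def __init__(self, channels):
        super().__init__()
        # Channel-wise calibration: one learnable scale per channel (assumed form).
        self.channel_scale = nn.Parameter(torch.ones(channels))
        # Spatial calibration: a small conv producing a spatial gate (assumed form).
        self.spatial = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, x):
        x = x * self.channel_scale.view(1, -1, 1, 1)   # channel-wise recalibration
        x = x * torch.sigmoid(self.spatial(x))         # spatial recalibration
        return x

# One calibration module per task; the shared layers are reused by all tasks.
per_task_calibration = {t: CalibrationModule(64) for t in range(3)}
```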
“…Others focus on the capacity of the neural network. One line of research [46,58,76,77,83,93] is to expand the network architecture while learning new knowledge. Another line [1,45] explores sparsity regularization of the network parameters, aiming to activate as few neurons as possible for each task.…”
Section: Related Work
confidence: 99%
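A hedged illustration of the sparsity-regularization line mentioned above ([1,45]): an L1 penalty on activations that encourages each task to use few neurons. The penalty form and its weight are assumptions for illustration only.

```python
# Hedged sketch: add an L1 activation penalty to the task loss so that each
# task activates as few neurons as possible.
import torch

def sparsity_penalty(activations, weight=1e-4):
    """L1 penalty over a list of activation tensors collected from the network."""
    return weight * sum(a.abs().mean() for a in activations)

# During training (illustrative): loss = task_loss + sparsity_penalty(acts)
```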
“…This is, however, challenging, as new and old knowledge are entangled in the model parameters, making it extremely difficult to maintain the fragile balance between learning new knowledge and retaining the old. Some other methods [46,58,76,77,83,93] increase the capacity of the model for a better stability-plasticity tradeoff, but at the cost of a growing network memory footprint.…”
Section: Introduction
confidence: 99%
“…Continual learning aims to balance two trade-offs: rigidity to change and plasticity to adapt, so that new data is learned but past data is not forgotten [3,4,5]. Singh et al [6] proposed Rectification-based Knowledge Retention (RKR), a method for modifying the weights and intermediate activations of a network for each task in continual learning. The weight modification is done by adding parameters generated by a Rectification Generator (RG) to the weights of each convolutional layer.…”
Section: Related Work
confidence: 99%
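The weight-rectification step described above can be sketched as follows; the internal structure of the generator is an assumption, not the authors' implementation.

```python
# Hedged sketch: a per-task Rectification Generator (RG) produces an additive
# correction that is applied to the shared convolutional weights.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class RectificationGenerator(nn.Module):
    """Generates an additive rectification for one conv layer's weight tensor."""
    def __init__(self, weight_shape, hidden=16):
        super().__init__()
        self.weight_shape = tuple(weight_shape)
        # A small task embedding expanded to a full weight correction (assumed form).
        self.embed = nn.Parameter(torch.zeros(hidden))
        self.expand = nn.Linear(hidden, math.prod(self.weight_shape))

    def forward(self, shared_weight):
        delta = self.expand(self.embed).view(self.weight_shape)
        return shared_weight + delta   # task-specific, rectified weights

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)   # shared layer
rg = RectificationGenerator(conv.weight.shape)      # one generator per task
rectified = rg(conv.weight)
out = F.conv2d(torch.randn(1, 3, 32, 32), rectified, padding=1)
```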