2019
DOI: 10.1038/s42256-019-0080-x

Continual learning of context-dependent processing in neural networks

Abstract: Deep artificial neural networks (DNNs) are powerful tools for recognition and classification as they learn sophisticated mapping rules between the inputs and the outputs. However, the rules learned by the majority of current DNNs used for pattern recognition are largely fixed and do not vary with different conditions. This limits the network's ability to work in more complex and dynamic situations in which the mapping rules themselves are not fixed but constantly change according to contexts, such as di…

Cited by 184 publications (164 citation statements). References 51 publications.
“…The OWM method was proposed by Zeng et al [51]. It protected previously learned knowledge by constraining the update direction of the parameter weights.…”
Section: Methods Description (mentioning, confidence: 99%)
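As context for the OWM idea cited above, the sketch below illustrates, under assumptions, how a projection matrix can constrain weight updates to directions orthogonal to inputs from previously learned tasks. The class, method names, and the alpha hyper-parameter are illustrative choices for a single fully connected layer, not the authors' reference implementation.

```python
import numpy as np

class OWMProjector:
    """Minimal OWM-style orthogonal projector for one linear layer
    (illustrative sketch; names and defaults are assumptions)."""

    def __init__(self, input_dim, alpha=1e-3):
        # Start from a scaled identity: no input directions are protected yet.
        self.P = np.eye(input_dim) / alpha

    def observe(self, x):
        """Shrink P along input x (recursive-least-squares-style update),
        so later gradients are pushed orthogonal to the span of past inputs."""
        x = np.asarray(x, dtype=float).reshape(-1, 1)
        Px = self.P @ x
        self.P -= (Px @ Px.T) / (1.0 + float(x.T @ Px))

    def project(self, grad_W):
        """Project a backprop gradient of shape (out_dim, in_dim) onto the
        subspace orthogonal to inputs seen in earlier tasks."""
        return grad_W @ self.P
```

In such a scheme, the ordinary gradient for the new task would be passed through `project` before the optimizer step, while `observe` is applied to layer inputs from tasks already learned, so new learning minimally disturbs old input-output mappings.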
“…Image samples in this dataset are not of a fixed size. ImageNet is used in [16, 17, 51, 70, 71, 76, 89, 95, 96].…”
Section: Datasets (mentioning, confidence: 99%)
“…148 It has been shown that input created from a generative model can indeed assist with learning, for example by preventing catastrophic forgetting. 149 It should be noted that in most of these cases the generative model exists outside the network itself, which is not biologically realistic in the case of the brain, although there are some exceptions. 150 What of those cases where the network itself acts as the generative model?…”
Section: The Overfitted Brain Hypothesis (mentioning, confidence: 99%)
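The preceding statement refers to generative replay, where pseudo-samples from a generative model of earlier tasks are interleaved with new-task data. A minimal sketch under assumptions follows; `old_task_generator.sample(n)` is an assumed interface returning (inputs, labels), not a specific library API.

```python
import numpy as np

def replay_batches(x_new, y_new, old_task_generator, n_steps,
                   batch_size=32, replay_fraction=0.5, seed=0):
    """Yield training batches that mix real current-task samples with
    pseudo-samples drawn from a generative model of earlier tasks
    (illustrative generative-replay sketch)."""
    rng = np.random.default_rng(seed)
    n_replay = int(batch_size * replay_fraction)
    n_real = batch_size - n_replay
    for _ in range(n_steps):
        idx = rng.choice(len(x_new), size=n_real, replace=False)
        x_old, y_old = old_task_generator.sample(n_replay)  # replayed pseudo-data
        yield (np.concatenate([x_new[idx], x_old]),
               np.concatenate([y_new[idx], y_old]))
```

Training on such mixed batches lets the network rehearse earlier mappings without storing the original data, which is the mechanism by which replay mitigates catastrophic forgetting.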
“…A research team from the Institute of Automation, Chinese Academy of Sciences, recently used the idea of orthogonal weights to fully protect previously acquired information when gradient optimization is performed on deep networks. The team demonstrated good lifelong learning ability in applications such as handwriting and face recognition [134]. However, most current studies are still limited to a single perception modality.…”
Section: Embodied Tactile Learning (mentioning, confidence: 99%)