2019
DOI: 10.1016/j.neunet.2019.04.005

DynMat, a network that can learn after learning

Abstract: To survive in a dynamically evolving world, we accumulate knowledge and improve our skills based on experience. In the process, gaining new knowledge does not disrupt our vigilance to external stimuli; in other words, our learning is 'accumulative' and 'online', proceeding without interruption. However, despite their recent success, artificial neural networks (ANNs) must be trained offline and suffer catastrophic interference between old and new learning, indicating that ANNs' conventional learning algorithms may …

Cited by 4 publications (3 citation statements) · References 48 publications

Citation statements, ordered by relevance:
“…To our knowledge, the use of cascaded architectures as described by Pollack (1987) is not common in modern AI approaches, and mirroring the inputs (duplicating the inputs to both the output networks and context networks), as done in our MCN models, is a novel approach. The MCN architecture, when combined with the addition of context inputs, provides a new alternative for solving the common problem of catastrophic interference in neural networks (Beer & Barak, 2019; Ellefsen et al., 2015; Lee, 2019). Future work is needed to explore how mirrored cascaded architectures can be applied to increase the efficiency and consistency of modern AI data fitting.…”
Section: Discussion
confidence: 99%
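The statement above describes the MCN wiring only in words. A minimal sketch of the "mirrored input" idea, assuming a toy NumPy setup; `mlp`, `context_net`, `output_net`, and all dimensions are hypothetical illustration names, not anything specified in the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, out_dim, hidden=16):
    # Tiny two-layer net with fixed random weights; for wiring illustration only.
    W1 = rng.normal(0.0, 0.1, (hidden, in_dim))
    W2 = rng.normal(0.0, 0.1, (out_dim, hidden))
    return lambda x: W2 @ np.tanh(W1 @ x)

in_dim, ctx_dim, out_dim = 8, 3, 4
context_net = mlp(in_dim, ctx_dim)           # maps the raw input to a context code
output_net = mlp(in_dim + ctx_dim, out_dim)  # sees the duplicated input plus the context code

def forward(x):
    # "Mirroring the inputs": the same x drives the context network and is
    # duplicated into the output network alongside the resulting context code.
    ctx = context_net(x)
    return output_net(np.concatenate([x, ctx]))

print(forward(rng.normal(size=in_dim)).shape)  # -> (4,)
```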
“…That is, we need effective methods to estimate similarities among HAPs. In doing so, we turn to our earlier short-term memory systems [16].…”
Section: Results
confidence: 99%
“…Our short-term memory systems [16] store novel input patterns (i.e., patterns substantially different from those already stored), and their outputs reflect the cosine similarities between a present input and the stored inputs. If the HAPs evoked by input patterns of the same class are well clustered together, the input patterns stored in the memory systems can approximate the distribution of input patterns (i.e., HAPs).…”
Section: Results
confidence: 99%
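The two statements above summarize the mechanism of [16]: store an input only when it is substantially novel, and output cosine similarities between the present input and everything stored. A minimal sketch under those stated assumptions; `ShortTermMemory` and the `novelty_threshold` value are hypothetical illustration choices, not taken from the paper:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

class ShortTermMemory:
    """Stores an input pattern only if it is sufficiently novel; the output is
    the vector of cosine similarities between the present input and all
    stored inputs. The novelty threshold is an assumed free parameter."""

    def __init__(self, novelty_threshold=0.9):
        self.threshold = novelty_threshold
        self.stored = []  # stored input patterns

    def __call__(self, x):
        sims = np.array([cosine(x, m) for m in self.stored])
        # Store x only when it differs substantially from every stored pattern.
        if len(self.stored) == 0 or sims.max() < self.threshold:
            self.stored.append(np.asarray(x, dtype=float))
            sims = np.append(sims, 1.0)  # similarity of x to itself
        return sims

rng = np.random.default_rng(0)
memory = ShortTermMemory()
for _ in range(5):
    out = memory(rng.normal(size=8))
print(len(memory.stored), out)  # stored count and similarities for the last input
```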