2019
DOI: 10.1101/589564
Preprint
A Robust Model of Gated Working Memory

Abstract: Gated working memory is defined as the capacity to hold arbitrary information at any time so that it can be used at a later time. Based on electrophysiological recordings, several computational models have tackled the problem using dedicated and explicit mechanisms. We propose instead to consider an implicit mechanism based on a random recurrent neural network. We introduce a robust yet simple reservoir model of gated working memory with instantaneous updates. The model is able to store an arbitrary real value…
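The reservoir approach the abstract describes can be illustrated with a minimal echo state network update that feeds its own output back into the recurrent state. This is only a sketch under assumed sizes and scalings; the class and parameter names are illustrative and are not the paper's code:

```python
import numpy as np

class FeedbackReservoir:
    """Minimal echo state network with output feedback (illustrative sketch)."""

    def __init__(self, n_in=2, n_units=100, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-1, 1, (n_units, n_in))   # input weights
        W = rng.normal(0, 1, (n_units, n_units))
        # Rescale the recurrent matrix to spectral radius 0.9 (assumed value).
        self.W = W * (0.9 / max(abs(np.linalg.eigvals(W))))
        self.W_fb = rng.uniform(-1, 1, n_units)           # output feedback weights
        self.W_out = np.zeros(n_units)                    # readout, to be trained
        self.x = np.zeros(n_units)                        # reservoir state
        self.y = 0.0                                      # last output

    def step(self, u):
        """One update: state driven by input, recurrence, and the fed-back output."""
        self.x = np.tanh(self.W_in @ u + self.W @ self.x + self.W_fb * self.y)
        self.y = self.W_out @ self.x
        return self.y
```

With the readout `W_out` at zero the output is trivially 0; a learning procedure would fit `W_out` so that the fed-back output latches the gated value.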

Cited by 5 publications (10 citation statements) | References 51 publications
“…Moreover, it could also help to understand the influence of the feedback when outputs are fed back to the reservoir. Given that feeding outputs back into the reservoir can reduce the dimensionality of the internal states (especially for symbolic tasks) [52][53][54], studying this effect more deeply with RSSviz would provide more insight into the complex influence of feedback in general. We believe that studying the combination of both unsupervised rules and feedback could lead to new learning rules for ESNs.…”
Section: Discussion
confidence: 99%
“…Thus, online learning offers a lighter approach to training reservoirs, with lower computational demand while still achieving a comparable level of accuracy. Perhaps more importantly, online incremental learning methods are crucial for computational neuroscience models [14] and developmental experiments in cognitive science (developmental psychology, robotics, ...) [6,11]. The current implementation of online learning in ReservoirPy is based on the FORCE learning method [15] and resides in a separate class: ESNOnline.…”
Section: Precisions on the Online Learning Feature
confidence: 99%
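FORCE learning trains the readout online with recursive least squares (RLS): at each step the readout predicts, then corrects its weights with an error-driven gain derived from a running inverse-correlation estimate. The following is a minimal sketch of such an RLS readout, assuming reservoir states are provided externally; the class name and signature are illustrative and do not reflect ReservoirPy's actual `ESNOnline` API:

```python
import numpy as np

class RLSReadout:
    """Minimal RLS (FORCE-style) online readout update (illustrative sketch)."""

    def __init__(self, n_units, alpha=1.0):
        self.w = np.zeros(n_units)           # readout weights
        self.P = np.eye(n_units) / alpha     # running inverse-correlation estimate

    def step(self, x, y_target):
        """One online update: predict from state x, then correct toward y_target."""
        y = self.w @ x                       # current prediction
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)              # RLS gain vector
        self.P -= np.outer(k, Px)            # update inverse correlation
        self.w += (y_target - y) * k         # error-driven weight update
        return y
```

On noiseless linear data the weights converge quickly, which is what makes this update attractive for training a reservoir readout in a single pass.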
“…Several members of our team and students have already used it for different tasks and purposes (e.g. to build Computational Neuroscience models and Human-Robot Interaction modules [6,11,14]); it is now time to share it more extensively.…”
Section: Introduction
confidence: 99%
“…We consider the gating task described in [24] and two additional variants. In this task the model receives an input V that varies continuously over time and a second input T (the trigger or gate) that is either 0 or 1.…”
Section: Tasks
confidence: 99%
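The gating task in this excerpt can be generated in a few lines: the target output is the value of V at the most recent time step where the gate T was 1. This is a hedged reconstruction from the description above; the function name, trigger density, and value range are illustrative assumptions:

```python
import numpy as np

def make_gating_task(n_steps=1000, p_trigger=0.01, seed=0):
    """Generate one trial of the gated working-memory task (illustrative sketch).

    V      : value input, drawn uniformly from [-1, 1] at each step.
    T      : gate input, 1 at sparse random times, 0 otherwise.
    target : the value of V at the last step where T was 1 (0 before any trigger).
    """
    rng = np.random.default_rng(seed)
    V = rng.uniform(-1, 1, n_steps)
    T = (rng.random(n_steps) < p_trigger).astype(float)   # sparse triggers
    target = np.zeros(n_steps)
    held = 0.0
    for t in range(n_steps):
        if T[t] == 1:
            held = V[t]          # latch the current value when the gate opens
        target[t] = held          # hold it until the next trigger
    return V, T, target
```

The two task variants mentioned in the excerpt are not specified here, so only the base task is sketched.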
“…Based on the original RC paradigm, we have shown in [24] how a reservoir with feedback connections can implement gated working memory, i.e. a generic mechanism for maintaining information from a given time (corresponding to when the gate is on).…”
confidence: 99%