2019
DOI: 10.1007/s12559-019-09634-2
Interpreting Recurrent Neural Networks Behaviour via Excitable Network Attractors

Abstract: Introduction: Machine learning provides fundamental tools both for scientific research and for the development of technologies with significant impact on society. It provides methods that facilitate the discovery of regularities in data and that give predictions without explicit knowledge of the rules governing a system. However, a price is paid for exploiting such flexibility: machine learning methods are typically black boxes, where it is difficult to fully understand what the machine is doing or how it is op…

Cited by 47 publications (42 citation statements)
References 59 publications
“…Here, we analyze the ability of the proposed model (2) to retain memory of past inputs in the state. To perform this analysis, we define a measure of the network's memory that quantifies the impact of past inputs on the current state x_n.…”
Section: Memory of Past Inputs (mentioning)
confidence: 99%
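The statement above defines a scalar measure of how strongly an input injected k steps in the past still influences the current state x_n. Below is a minimal sketch of one way such a sensitivity can be estimated by finite differences, assuming a standard echo state network update x_{n+1} = tanh(W x_n + w_in u_n); the model, parameter values, and function names are illustrative assumptions, not the cited paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                      # reservoir size (illustrative)
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
W *= 0.95 / max(abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.95
w_in = rng.uniform(-1.0, 1.0, N)             # input weights

def run(u):
    """Drive the reservoir with input sequence u; return the final state x_n."""
    x = np.zeros(N)
    for u_n in u:
        x = np.tanh(W @ x + w_in * u_n)
    return x

def memory_measure(u, k, eps=1e-6):
    """Finite-difference sensitivity of x_n to the input k steps in the past."""
    u_pert = u.copy()
    u_pert[-k] += eps                        # perturb the input k steps back
    return np.linalg.norm(run(u_pert) - run(u)) / eps

u = rng.uniform(-1.0, 1.0, 200)
for k in (1, 5, 20, 50):
    print(f"k={k:3d}  sensitivity={memory_measure(u, k):.3e}")
```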
“…For linear and nonlinear networks, the input scaling (a constant scaling factor applied to the input signal) is fixed to 1 and the spectral radius (SR) equals ρ = 0.95. For the proposed model (2), the input scaling is chosen to be 0.01, while the SR is ρ = 15. For the sake of simplicity, in what follows we refer to ESNs resulting from the use of (2) as the "spherical reservoir".…”
Section: Performance on Memory Tasks (mentioning)
confidence: 99%
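The two hyperparameters named in the quote, input scaling and spectral radius (SR), are standard ESN knobs. The following is a minimal sketch of how they are typically applied when building a reservoir; the function and values are illustrative. Note that an SR of 15 would be unusual for a plain ESN, so the quoted model (2) presumably stabilizes its dynamics by other means (e.g., a spherical constraint on the state).

```python
import numpy as np

def make_reservoir(N, spectral_radius, input_scaling, seed=0):
    """Build recurrent and input weights with the two hyperparameters applied."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0, (N, N))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # set SR exactly
    w_in = input_scaling * rng.uniform(-1.0, 1.0, N)       # scale the input
    return W, w_in

# Settings from the quote: standard networks use scaling 1 and SR 0.95 ...
W_std, w_in_std = make_reservoir(100, spectral_radius=0.95, input_scaling=1.0)
# ... while the "spherical reservoir" model (2) uses scaling 0.01 and SR 15.
W_sph, w_in_sph = make_reservoir(100, spectral_radius=15.0, input_scaling=0.01)
```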
“…In a similar approach, recent theoretical work on the behavior of RNNs has introduced the concept of excitable network attractors, which are characterized by stable states of a system connected by excitable connections (Ceni et al., 2019). The conceptual idea of orbits between fixed points may further be implemented in different ways.…”
Section: Heteroclinic Trajectories (mentioning)
confidence: 99%
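To make the notion of stable states connected by excitable connections concrete, here is a toy illustration (my own, not taken from Ceni et al., 2019): a one-dimensional bistable system whose two stable fixed points play the role of the attractor's states, and where only an input pulse exceeding the threshold triggers a transition between them.

```python
import numpy as np

def simulate(x0, kick, t_kick=200, steps=1000, dt=0.01):
    """Euler-integrate dx/dt = x - x**3 with one input pulse at step t_kick.
    Stable fixed points sit at -1 and +1; the unstable point at 0 is the
    excitability threshold separating them."""
    x = x0
    for n in range(steps):
        if n == t_kick:
            x += kick               # transient input pulse
        x += dt * (x - x**3)        # bistable flow
    return x

print(f"sub-threshold kick:   x -> {simulate(-1.0, kick=0.8):+.2f}")  # relaxes back to -1
print(f"supra-threshold kick: x -> {simulate(-1.0, kick=1.2):+.2f}")  # switches to +1
```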
“…For a review on RC training (including issues of reservoir design discussed in the next section), see [54] and other works [55,56]. Also, the computational capacity of RC can be boosted if a (trainable) feedback is added from the linear readouts to the reservoir [57][58][59][60][61]. This allows for more agile context-dependent computations and longer-term memory [57].…”
Section: Computational Aspects of Reservoir Computing (mentioning)
confidence: 99%
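Below is a minimal sketch of the readout-to-reservoir feedback loop described in the quote, assuming a standard ESN; all weights and sizes are illustrative, and in practice W_out would be trained (e.g., by ridge regression or FORCE learning) rather than drawn at random as it is here.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_out = 100, 1
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))       # spectral radius 0.9
w_in = rng.uniform(-1.0, 1.0, N)                # input weights
W_fb = rng.uniform(-1.0, 1.0, (N, n_out))       # feedback weights (fixed)
W_out = 0.1 * rng.normal(0.0, 1.0, (n_out, N))  # stand-in for a trained readout

def step(x, u_n):
    y = W_out @ x                               # linear readout
    x = np.tanh(W @ x + w_in * u_n + W_fb @ y)  # readout fed back into the state
    return x, y

x = np.zeros(N)
for u_n in rng.uniform(-1.0, 1.0, 50):          # drive with a random input signal
    x, y = step(x, u_n)
print("readout after 50 steps:", y)
```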