2018 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2018.8489462

Concentric ESN: Assessing the Effect of Modularity in Cycle Reservoirs

Abstract: The paper introduces the concentric Echo State Network, an approach to designing reservoir topologies that tries to bridge the gap between deterministically constructed simple cycle models and deep reservoir computing approaches. We show how to modularize the reservoir into simple unidirectional and concentric cycles with pairwise bidirectional jump connections between adjacent loops. We provide a preliminary experimental assessment showing how concentric reservoirs yield superior predictive accuracy and memory capacity…


Cited by 3 publications (6 citation statements) · References 19 publications
“…On the methodological side, a natural extension of the work in this paper is to analyze the effect of a broader pool of reservoir architectural variants, including e.g. small-world [19], cycles with regular jumps [24] and concentric [1] reservoirs. Moreover, future research could pursue even further the simplification of architectural construction in deep RC models, reducing the impact of randomness in the network initialization in the same vein as the works on minimum complexity ESNs [23,24].…”
Section: Discussion
confidence: 99%
“…In particular, at time-step t, the state of the first layer, i.e. x^{(1)}(t) ∈ ℝ^{N_R}, is computed as follows:…”
Section: Deep Echo State Network
confidence: 99%
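The state equation referenced in the quote above is cut off in this excerpt. As a minimal sketch, assuming the standard leaky-integrator update commonly used for the first layer of a deep ESN (all names, sizes and hyperparameter values below are illustrative, not taken from the cited paper):

```python
import numpy as np

# Sketch of a leaky-integrator first-layer update x^{(1)}(t); names are illustrative.
rng = np.random.default_rng(0)
n_in, n_res, leak = 1, 100, 0.5                  # input size, reservoir size N_R, leak rate (assumed)
W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))     # untrained input-to-reservoir weights
W = rng.uniform(-1.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale to spectral radius 0.9

def first_layer_step(x_prev, u_t):
    """One time-step of the first reservoir layer: x(t) from x(t-1) and input u(t)."""
    pre_activation = W_in @ u_t + W @ x_prev
    return (1.0 - leak) * x_prev + leak * np.tanh(pre_activation)

x = np.zeros(n_res)
for u_t in rng.standard_normal((20, n_in)):      # drive with a short random input sequence
    x = first_layer_step(x, u_t)
```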
“…Different studies have proposed alternatives to the random structure of the connectivity matrix of ESNs, formulating models of reservoirs with regular graph structures. Examples include a delay line [17], where each node receives information only from the previous node and passes it only to the following one, and the concentric reservoir proposed in [18], where multiple delay lines are connected to form a concentric structure. Furthermore, the idea of a hierarchical architecture of ESNs, where each ESN is connected to the preceding and following one, has attracted the reservoir computing community for its capability of discovering higher-level features of the external signal [34].…”
Section: Hierarchical Echo-state Network
confidence: 99%
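As a rough illustration of the topologies discussed in the quote above, the sketch below builds a simple cycle (ring) reservoir and then stacks several cycles with bidirectional jump links between adjacent rings. This is an assumed construction for illustration only; the exact wiring of the concentric reservoir in [18] / [1] may differ, and the function names and weight values are hypothetical.

```python
import numpy as np

def cycle_reservoir(n, r=0.9):
    """Simple cycle reservoir: node i feeds node (i+1) mod n with fixed weight r.
    Dropping the wrap-around link would give a plain delay line."""
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = r
    return W

def concentric_reservoir(sizes, r=0.9, jump=0.3):
    """Illustrative concentric layout: cycles stacked block-diagonally, with
    bidirectional 'jump' links between corresponding nodes of adjacent rings."""
    n = sum(sizes)
    W = np.zeros((n, n))
    offsets = np.cumsum([0] + list(sizes))
    for k, s in enumerate(sizes):                        # one unidirectional cycle per ring
        W[offsets[k]:offsets[k] + s, offsets[k]:offsets[k] + s] = cycle_reservoir(s, r)
    for k in range(len(sizes) - 1):                      # pairwise jumps between adjacent rings
        for i in range(min(sizes[k], sizes[k + 1])):
            a, b = offsets[k] + i, offsets[k + 1] + i
            W[a, b] = W[b, a] = jump
    return W

W = concentric_reservoir([20, 30, 40])                   # three concentric rings
```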
“…Previous works have studied the dependence of the reservoir properties on the structure of the random connectivity adopted, studying the dependence of the reservoir performance on the parameters defining the random connectivity distribution, and formulating alternatives to the typical Erdős–Rényi graph structure of the network [16][17][18]. In this sense, in [17] a model with a regular graph structure has been proposed, where the nodes are connected forming a circular path with constant shortest path lengths equal to the size of the network, introducing long temporal memory capacity by construction.…”
Section: Introduction
confidence: 99%
“…Different from general RNNs trained with back-propagation, such as LSTMs and gated recurrent units (GRUs), in RC only the readout coefficients W_out from the reservoir layer to the output layer need to be trained for a particular task. The internal network parameters, namely the adjacency matrix W_in from the input layer to the reservoir layer and the connections W inside the reservoir, are untrained: they are fixed and random [67] or arranged in a regular topology [69][70][71]. In the training phase of conventional reservoir computing architectures, the reservoir state is collected at each discrete time step n following…”
Section: Optical Reservoir Computing
confidence: 99%
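The training scheme summarized in the quote above, where only W_out is fitted while W_in and W stay fixed, can be illustrated with a small ridge-regression readout over collected states. This is a generic reservoir computing sketch under assumed hyperparameters and a toy target, not the cited paper's implementation.

```python
import numpy as np

# Sketch: collect reservoir states, then fit only the readout W_out by ridge regression.
rng = np.random.default_rng(1)
n_in, n_res, leak, ridge = 1, 100, 0.5, 1e-6     # assumed hyperparameters
W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))     # fixed, untrained input weights
W = rng.uniform(-1.0, 1.0, (n_res, n_res))       # fixed, untrained recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

U = rng.standard_normal((500, n_in))             # input sequence u(n)
Y = np.roll(U, 1, axis=0)                        # toy target: reproduce the previous input

X, x = [], np.zeros(n_res)
for u_t in U:                                    # collect the reservoir state at each time step n
    x = (1 - leak) * x + leak * np.tanh(W_in @ u_t + W @ x)
    X.append(x)
X = np.array(X)

# Closed-form ridge regression: W_out = Y^T X (X^T X + lambda I)^{-1}
W_out = Y.T @ X @ np.linalg.inv(X.T @ X + ridge * np.eye(n_res))
predictions = X @ W_out.T                        # readout applied to the collected states
```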