2017
DOI: 10.1109/tnnls.2016.2514275

Growing Echo-State Network With Multiple Subreservoirs

Abstract: An echo-state network (ESN) is an effective alternative to gradient methods for training recurrent neural networks. However, it is difficult to determine the structure (mainly the reservoir) of an ESN to match a given application. In this paper, a growing ESN (GESN) is proposed to design the size and topology of the reservoir automatically. First, the GESN uses block matrix theory to add hidden units to the existing reservoir group by group, which leads to a GESN with multiple subreservoirs.…
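
The abstract only outlines the block-wise growth step. Below is a minimal sketch of one plausible reading, in which each added group of units forms a new diagonal block of the recurrent weight matrix; the function name, the growth rule, and the spectral-radius heuristic are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def add_subreservoir(W, W_in, n_new, n_inputs, spectral_radius=0.9, rng=None):
    """Append one subreservoir block to an existing reservoir (illustrative).

    The new units connect recurrently only among themselves, so the grown
    weight matrix is block-diagonal and its eigenvalues are the union of
    the blocks' eigenvalues.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Random recurrent weights for the new subreservoir, rescaled so the
    # new block respects the usual spectral-radius heuristic on its own.
    W_new = rng.uniform(-1, 1, (n_new, n_new))
    W_new *= spectral_radius / max(abs(np.linalg.eigvals(W_new)))
    n_old = W.shape[0]
    W_grown = np.block([
        [W,                        np.zeros((n_old, n_new))],
        [np.zeros((n_new, n_old)), W_new],
    ])
    # Input weights for the added units are drawn independently.
    W_in_grown = np.vstack([W_in, rng.uniform(-1, 1, (n_new, n_inputs))])
    return W_grown, W_in_grown
```

Under this block-diagonal reading, the spectral radius of the grown reservoir is simply the maximum over the subreservoir blocks, which is what makes group-wise growth cheap to check for stability.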

Cited by 164 publications (42 citation statements)
References 31 publications

“…A scale-free, highly clustered ESN was proposed in [20], which used small-world and scale-free network topologies to create the reservoir. A growing ESN was proposed in [21], which can determine the size and topology of the reservoir automatically. The simulation results showed that the growing ESN outperforms the original ESN (with fixed size and topology) in both predictive performance and learning speed.…”
Section: Introduction (mentioning)
confidence: 99%
“…To guarantee the echo state property (ESP) without rescaling the reservoir weights, a reservoir construction method based on eigenvalue decomposition has been proposed, and the convergence of the algorithm is also theoretically guaranteed. However, the eigenvalues are still generated randomly [9].…”
Section: Introduction (mentioning)
confidence: 99%
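
The statement above describes building a reservoir from prescribed eigenvalues, so that the spectral-radius condition holds by construction rather than by post-hoc weight scaling. A hedged sketch of that idea follows; it is not necessarily the method of [9], and the function name and the restriction to real eigenvalues are assumptions made for brevity:

```python
import numpy as np

def reservoir_from_eigenvalues(n, rho=0.9, rng=None):
    """Build W with a prescribed spectral radius by fixing its eigenvalues.

    Draw eigenvalues inside the disk of radius rho (here randomly, as the
    citing statement notes), then form W = V diag(lam) V^{-1}, which has
    exactly those eigenvalues, so no rescaling of W is needed afterwards.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rho * rng.uniform(-1, 1, n)    # real eigenvalues in (-rho, rho)
    V = rng.standard_normal((n, n))      # random (almost surely invertible) basis
    return V @ np.diag(lam) @ np.linalg.inv(V)
```

Because the similarity transform preserves eigenvalues, placing every eigenvalue inside the unit disk bounds the spectral radius directly, which is the usual heuristic condition associated with the ESP.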
“…The input-to-hidden and hidden-to-hidden (recurrent) weights are randomly initialized and fixed during the learning stage. By learning a readout on the state dynamics generated by the untrained recurrent hidden layer, ESNs avoid the laborious process of gradient-descent RNN training, yet achieve excellent performance in time series prediction (Chatzis & Demiris, 2011; Jaeger & Haas, 2004; Li, Han, & Wang, 2012; Lukoševičius & Jaeger, 2009; Qiao, Li, Han, & Li, 2017; Shi & Han, 2007) and speech recognition. However, their use in time series classification (TSC) tasks has not been fully explored. In recent years, some researchers have applied ESNs to TSC tasks.…”
Section: Introduction (mentioning)
confidence: 99%
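
The statement above summarizes the standard ESN training recipe: the reservoir runs untrained, and only the linear readout is fit to the collected states. A minimal sketch, assuming a tanh reservoir and a ridge-regression readout (function and parameter names are illustrative):

```python
import numpy as np

def train_esn_readout(W, W_in, inputs, targets, washout=50, ridge=1e-6):
    """Train only the linear readout; W and W_in stay fixed throughout."""
    n = W.shape[0]
    x = np.zeros(n)
    states = []
    for u in inputs:                       # drive the reservoir, collect states
        x = np.tanh(W @ x + W_in @ u)
        states.append(x)
    X = np.array(states)[washout:]         # discard the initial transient
    Y = np.array(targets)[washout:]
    # Ridge regression: W_out = Y^T X (X^T X + ridge * I)^{-1}
    return Y.T @ X @ np.linalg.inv(X.T @ X + ridge * np.eye(n))
```

Since the only trained parameters solve a linear least-squares problem, training reduces to one matrix inversion, which is exactly why ESNs sidestep gradient-descent RNN training.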