2019
DOI: 10.1016/j.neucom.2019.05.068
Effects of singular value spectrum on the performance of echo state network

Cited by 10 publications
(4 citation statements)
References 39 publications
“…Reservoir computing is grounded in the idea of using a large randomly and sparsely connected recurrent layer called a reservoir. This method presents an efficient alternative to gradient-based learning algorithms for designing and training RNNs in most cases [ 49 ]. The echo state network (ESN) and its variants have stimulated researchers’ interest in recent decades owing to simple model training.…”
Section: Introduction
Mentioning confidence: 99%
“…It has conventionally been chosen to be slightly less than 1. Moreover, the weight matrix in the reservoir layer W_res is scaled by the spectral radius in order to balance the validity of the ESP and the performance, according to [26,27]. Lastly, the resulting output response at time t can be described by…”
Section: Introduction
Mentioning confidence: 99%
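The spectral-radius scaling mentioned in the excerpt above can be sketched in a few lines of NumPy. This is a minimal illustration, not the cited papers' implementation: the reservoir size, the ~10% connectivity, and the target radius of 0.95 are assumed values chosen only to demonstrate the technique.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res = 100  # reservoir size (illustrative)

# Random, sparsely connected reservoir matrix (~10% nonzero entries)
W = rng.uniform(-1.0, 1.0, (n_res, n_res))
W[rng.random((n_res, n_res)) > 0.1] = 0.0

# Rescale so the spectral radius sits slightly below 1,
# a common heuristic for preserving the echo state property (ESP)
rho = max(abs(np.linalg.eigvals(W)))
W_res = W * (0.95 / rho)

# One leak-free ESN state update with the scaled reservoir:
# x(t+1) = tanh(W_res x(t) + W_in u(t))
W_in = rng.uniform(-1.0, 1.0, (n_res, 1))  # input weights (illustrative)
x = np.zeros(n_res)
u = np.array([0.5])  # one scalar input sample
x = np.tanh(W_res @ x + W_in @ u)
```

After rescaling, the largest eigenvalue magnitude of `W_res` equals the chosen target (0.95 here), regardless of the draw of `W`.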
“…Steiner et al. [11] used the K-means algorithm for unsupervised initialization of the input weight matrix, which made the network performance comparable to or better than that of a randomly initialized ESN while requiring significantly fewer reservoir neurons. Li et al. [12] designed reservoirs with predefined sparsity and singular values. Chen et al. [13] constructed an incremental inverse-free ESN that obtains weights from the information of the previous optimal reservoir state and the freshly added reservoir neurons.…”
Section: Introduction
Mentioning confidence: 99%
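The K-means input-weight initialization attributed to Steiner et al. [11] can be sketched as follows. This is a hedged illustration, not their code: the toy K-means routine, the training inputs `U`, and the reservoir size of 20 neurons are all assumptions made here to show the idea of setting each row of the input weight matrix to a cluster centre of the training inputs.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's-algorithm K-means on rows of X; returns k cluster centres."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centre, then recompute centres
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

# Hypothetical setup: 200 training input vectors of dimension 3,
# a reservoir of 20 neurons. Each neuron's input-weight row becomes
# a cluster centre instead of a random draw.
rng = np.random.default_rng(1)
U = rng.normal(size=(200, 3))
W_in = kmeans(U, k=20)   # shape (20, 3): one row per reservoir neuron
```

The design intuition is that centring input weights on clusters of the training inputs makes individual neurons respond selectively to distinct input regions, which is why fewer reservoir neurons can suffice than with random initialization.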