2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla.2018.00072
Analysis of Memory Capacity for Deep Echo State Networks

Abstract: In this paper, the echo state network (ESN) memory capacity, which represents the amount of input data an ESN can store, is analyzed for a new type of deep ESN. In particular, two deep ESN architectures are studied. First, a parallel deep ESN is proposed in which multiple reservoirs are connected in parallel, allowing the outputs of multiple ESNs to be averaged, thus decreasing the prediction error. Then, a series ESN architecture is proposed in which ESN reservoirs are placed in cascade such that the output of each E…
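The two architectures named in the abstract can be sketched minimally in NumPy. Everything below is an illustrative assumption, not the paper's exact construction: the reservoir sizes, the spectral-radius scaling, and in particular the scalar `drive` passed between cascaded reservoirs (a stand-in for the trained per-ESN output that the paper feeds to the next stage).

```python
import numpy as np

def reservoir_step(x, u, W, W_in, leak=1.0):
    """One leaky-tanh reservoir update: x' = (1-a)x + a*tanh(W x + W_in u)."""
    return (1 - leak) * x + leak * np.tanh(W @ x + W_in * u)

rng = np.random.default_rng(0)
n = 50   # reservoir size (small, for illustration only)
k = 3    # number of reservoirs in the deep ESN

def make_reservoir():
    # Scale recurrent weights to spectral radius < 1 (echo state property).
    W = rng.normal(size=(n, n))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))
    W_in = rng.normal(size=n)
    return W, W_in

reservoirs = [make_reservoir() for _ in range(k)]
u_seq = rng.normal(size=100)  # scalar input sequence

# Parallel deep ESN: every reservoir receives the same input;
# their states (standing in for per-ESN outputs) are averaged.
states_par = [np.zeros(n) for _ in range(k)]
for u in u_seq:
    states_par = [reservoir_step(x, u, W, W_in)
                  for x, (W, W_in) in zip(states_par, reservoirs)]
avg_state = np.mean(states_par, axis=0)

# Series (cascade) deep ESN: each reservoir is driven by a scalar
# summary of the previous reservoir's state.
states_ser = [np.zeros(n) for _ in range(k)]
for u in u_seq:
    drive = u
    for i, (W, W_in) in enumerate(reservoirs):
        states_ser[i] = reservoir_step(states_ser[i], drive, W, W_in)
        drive = states_ser[i].mean()  # hypothetical stand-in for a trained readout
```

In the parallel case the averaging happens at the output, so the reservoirs evolve independently; in the series case each stage's input depends on the previous stage, which is what changes the memory behavior analyzed in the paper.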

Cited by 7 publications (4 citation statements)
References 14 publications (20 reference statements)
“…DeepESN requires a large reservoir size (U) to adequately train the randomly initialized weights of its hidden layers. The network performance with different reservoir configurations has been investigated based on datasets from both simulations and practical applications [48,49]. The general conclusion is that a higher U improves the prediction performance of DeepESN, while an excessively large U may lead to substantial computational resource consumption and overfitting.…”
Section: Deep Echo State Network (DeepESN) Performance Study
Confidence: 99%
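The trade-off described above can be illustrated with a toy experiment: train a ridge readout for reservoirs of increasing size U on a delay-recall task and compare training errors. The task, regularization constant, and sizes are all assumptions made for illustration; the point is only that a larger U gives the linear readout more features to fit with.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_esn(u_seq, U, washout=50):
    """Collect reservoir states for a scalar input sequence (size-U reservoir)."""
    W = rng.normal(size=(U, U))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # echo state scaling
    W_in = rng.normal(size=U)
    x = np.zeros(U)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in * u)
        states.append(x.copy())
    return np.array(states)[washout:]

def ridge_fit_mse(X, y, lam=1e-6):
    """Train a ridge readout and return the training MSE."""
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return np.mean((X @ w - y) ** 2)

T = 500
u = rng.uniform(-1, 1, size=T)
target = np.roll(u, 3)  # recall the input from 3 steps ago
errs = {U: ridge_fit_mse(run_esn(u, U), target[50:]) for U in (10, 50, 200)}
```

A larger reservoir drives the training error down, but (as the excerpt notes) a very large U costs O(U^2) per update for the recurrent multiply and invites overfitting when the readout has more features than the data can constrain.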
“…[47] reported a trade-off between the size of the reservoir and/or the number of training samples, due to the memory limit. Inheriting from ESNs, a DeepESN can achieve superior performance with an optimally configured reservoir, as discussed in [48][49][50][51]. DeepESNs with different reservoir configurations have been applied to several datasets, including nonlinear autoregressive moving average (NARMA), and for predictions pertaining to users' locations, orientations, and base station associations, respectively [49].…”
Section: Introduction
Confidence: 99%
“…Next, we derive closed-form expressions for the memory capacity of the three ESN models described in Section III, namely, the single ESN model, the parallel ESN model, and the series ESN model. Note that our previous work [35] analyzed the memory capacity for a centralized parallel ESN model. In contrast, here, we analyze the memory capacity for three ESN models used for federated learning.…”
Section: Memory Capacity Analysis
Confidence: 99%
“…Another recent related model is the deep RC. Taking the ESN as an example, typical deep ESN architectures can be classified into the series ESN and the parallel ESN, from which other deep structures are derived [13,30,61].…”
Section: Related Work
Confidence: 99%