2017
DOI: 10.1016/j.neucom.2016.12.089

Deep reservoir computing: A critical experimental analysis

Cited by 373 publications (289 citation statements)
References 11 publications
“…Here, tanh indicates the element-wise application of the hyperbolic tangent nonlinearity, u(t) ∈ ℝ^{N_U} represents the external input at time-step t, while W_in, W^(l) and Ŵ^(l) respectively denote the input weight matrix (that modulates the external input stimulation to the first layer), the inter-layer weight matrix for layer l (that modulates the strength of the connections from layer l − 1 to layer l), and the recurrent reservoir weight matrix for layer l. In both the above equations 1 and 2 we omitted the bias terms for the ease of notation. The interested reader can find in [7] a more detailed description of the deep reservoir equations, framed in the more general context of leaky integrator reservoir units. In order to set up initial conditions for the state update equations 1 and 2, at time-step 0 all reservoir layers are set to a null state, i.e.…”
Section: Deep Echo State Network (mentioning)
confidence: 99%
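The layered state update described in the quoted statement can be illustrated with a short sketch. The NumPy code below is a minimal illustration, assuming standard untrained random weights: a tanh nonlinearity, an input matrix W_in driving the first layer, inter-layer matrices W^(l), recurrent matrices Ŵ^(l), and null initial states at time-step 0. It omits bias terms, the leaky-integrator formulation, and the weight scaling and spectral-radius initialization discussed in [7]; all names and sizes are illustrative and not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
N_U, N_R, N_L = 3, 50, 4  # input size, units per layer, number of layers

# Untrained, randomly initialized weights, as usual in reservoir computing.
W_in = rng.uniform(-1, 1, (N_R, N_U))                          # input -> layer 1
W = [rng.uniform(-1, 1, (N_R, N_R)) for _ in range(N_L)]       # layer l-1 -> layer l (entry 0 unused)
W_hat = [rng.uniform(-1, 1, (N_R, N_R)) for _ in range(N_L)]   # recurrent weights of layer l

def deep_esn_states(inputs):
    """Run the layered state update; `inputs` has shape (T, N_U)."""
    x = [np.zeros(N_R) for _ in range(N_L)]   # null state in every layer at time-step 0
    history = []
    for u in inputs:
        new_x = []
        for l in range(N_L):
            # First layer is driven by the external input, deeper layers by the previous layer.
            drive = W_in @ u if l == 0 else W[l] @ new_x[l - 1]
            new_x.append(np.tanh(drive + W_hat[l] @ x[l]))
        x = new_x
        history.append(np.concatenate(x))
    return np.array(history)                  # shape (T, N_L * N_R)

states = deep_esn_states(rng.standard_normal((100, N_U)))
print(states.shape)  # (100, 200)
```

In a full deep ESN the concatenated layer states would then feed a trained linear readout; only the untrained recurrent part is sketched here.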
“…The above analysis also shows that the time depends on the training process, which is closely relevant to long-term time series prediction 4. Next, we compare the performance of the given method with that of two benchmark methods (LSTNet [55] and DeepESN [51]) to check whether they can improve predictive performance, especially for long-term series prediction. Tab.…”
Section: The Evolution Of Deep Echo State Network For Time Series Prediction (mentioning)
confidence: 99%
“…To this end, various architectures have been proposed with multiple reservoirs, additional projections, autoencoders, plasticity mechanisms, etc. [5,6,12,14,29,32].…”
Section: Architecture (mentioning)
confidence: 99%
“…a time series). The MLE, denoted λ_max, of a Mod-DeepESN instance can be computed for a given set of input sequences by (12)…”
Section: Lyapunov Exponent (mentioning)
confidence: 99%
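The quoted statement refers to an equation (12) of the citing work, which is not reproduced here. As a hedged illustration of the general idea only, a common recipe for estimating the maximal (local) Lyapunov exponent of a tanh reservoir is to average, along the input-driven state trajectory, the log of the largest eigenvalue magnitude of the state-map Jacobian. The sketch below follows that standard recipe for a single reservoir; it is an assumption for illustration and is not claimed to match equation (12) or the Mod-DeepESN formulation.

```python
import numpy as np

def estimate_mle(W_hat, W_in, inputs, washout=50):
    """Estimate the maximal local Lyapunov exponent of a tanh reservoir by
    averaging log|largest Jacobian eigenvalue| along the driven trajectory.
    `inputs` has shape (T, N_U). Illustrative recipe, not the cited paper's (12)."""
    N_R = W_hat.shape[0]
    x = np.zeros(N_R)                  # null initial state
    log_sum, steps = 0.0, 0
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ u + W_hat @ x)
        if t < washout:
            continue                   # discard the initial transient
        # Jacobian of x(t+1) = tanh(W_in u(t+1) + W_hat x(t)) w.r.t. x(t):
        # diag(1 - x(t+1)^2) @ W_hat, written here as a row-wise scaling.
        J = (1.0 - x**2)[:, None] * W_hat
        log_sum += np.log(np.max(np.abs(np.linalg.eigvals(J))))
        steps += 1
    return log_sum / steps

rng = np.random.default_rng(1)
N_U, N_R = 3, 50
W_in = rng.uniform(-1, 1, (N_R, N_U))
W_hat = rng.uniform(-1, 1, (N_R, N_R))
W_hat *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_hat)))   # rescale to spectral radius 0.9
print(estimate_mle(W_hat, W_in, rng.standard_normal((500, N_U))))
```

A negative estimate is commonly read as contractive (echo-state-like) dynamics, while values near or above zero indicate operation close to the edge of stability.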