A decentralized training algorithm for Echo State Networks in distributed big data applications
2016 | DOI: 10.1016/j.neunet.2015.07.006


Cited by 109 publications (56 citation statements: 0 supporting, 56 mentioning, 0 contrasting)
References 46 publications

Citation statements (ordered by relevance):

“…We propose, instead, to use information derived from RR (8), DET (9), LAM (12), ENTR (13), and SWRP (14) to determine the edge of stability. In fact, we observe that, when such indices start to fluctuate, the network achieves high prediction accuracy.…”
Section: B. Design of a Stable and Effective Network (mentioning)
Confidence: 99%
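The excerpt above relies on standard recurrence quantification analysis (RQA) measures; the equation numbers (8), (9), (12), (13), (14) refer to definitions in the citing paper itself. As a rough illustration only, below is a minimal NumPy sketch of the classical measures RR, DET, LAM, and ENTR computed from a binary recurrence plot. SWRP is a measure specific to the citing paper and is omitted here, and the scalar-series recurrence matrix without time-delay embedding is an assumption made for brevity.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot: R[i, j] = 1 iff |x[i] - x[j]| < eps.
    (Scalar series, no time-delay embedding, to keep the sketch short.)"""
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

def _run_lengths(v):
    """Lengths of consecutive runs of 1s in a 1-D 0/1 array."""
    runs, run = [], 0
    for b in v:
        if b:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    return runs

def rqa_measures(R, lmin=2, vmin=2):
    """RR, DET, LAM, ENTR from a binary recurrence matrix (standard definitions)."""
    n = R.shape[0]
    rr = R.sum() / n ** 2                         # recurrence rate

    # Diagonal lines of length >= lmin (upper triangle, doubled by symmetry).
    diag = []
    for k in range(1, n):
        diag += _run_lengths(np.diagonal(R, k))
    diag = np.array([l for l in diag if l >= lmin])
    off_diag_points = max(R.sum() - np.trace(R), 1)
    det = 2 * diag.sum() / off_diag_points        # determinism

    # Vertical lines of length >= vmin.
    vert = []
    for j in range(n):
        vert += _run_lengths(R[:, j])
    vert = np.array([l for l in vert if l >= vmin])
    lam = vert.sum() / max(R.sum(), 1)            # laminarity

    # Shannon entropy of the diagonal line-length distribution.
    if diag.size:
        _, counts = np.unique(diag, return_counts=True)
        p = counts / counts.sum()
        entr = float(-(p * np.log(p)).sum())
    else:
        entr = 0.0
    return rr, det, lam, entr
```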
“…It is possible to rely on specific RQA measures, such as DET (9), ENTR (13), and SWRP (14), to numerically quantify the amount of time dependence. All three indices would yield very low values (close to zero) when there is no time-dependence in the signal.…”
Section: Visualization and Classification of Reservoir Dynamics (mentioning)
Confidence: 99%
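To illustrate the claim in this excerpt, a quick check reusing the sketch functions above (the eps threshold and signal lengths are arbitrary choices): the indices drop sharply for white noise, which has no time dependence, relative to a deterministic signal such as a sine wave.

```python
rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 400)

for name, x in [("sine", np.sin(t)), ("white noise", rng.standard_normal(400))]:
    R = recurrence_matrix(x, eps=0.2 * x.std())
    rr, det, lam, entr = rqa_measures(R)
    print(f"{name:11s} RR={rr:.3f} DET={det:.3f} LAM={lam:.3f} ENTR={entr:.3f}")
```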
“…Among them, we can cite distributed protocols for support vector machines [7], functional-link networks [8], linear neurons [9], adaptive resonance theory (ART) networks [10], and many others. However, as we argued in [11], what is needed in many contexts is a distributed training algorithm for recurrent neural networks (RNNs). Thanks to the presence of recurrent connections, RNNs are able to efficiently capture the dynamics in the underlying process to be learned.…”
Section: Introduction (mentioning)
Confidence: 99%
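The Echo State Network referenced throughout is the standard reservoir-computing architecture: a fixed random recurrent reservoir whose state encodes the input history, with only a linear readout being trained. Below is a minimal single-machine sketch of that baseline (reservoir size, weight scaling, and the ridge penalty are arbitrary assumptions); the paper's contribution is a decentralized variant in which the readout is trained jointly across agents holding local data, which this sketch does not implement.

```python
import numpy as np

def esn_fit_readout(u, y, n_res=200, rho=0.9, ridge=1e-6, seed=0):
    """Minimal Echo State Network baseline: drive a fixed random reservoir
    with the input sequence u, then fit the linear readout to targets y
    by ridge regression. Only the readout is trained."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, n_res)          # input weights (scalar input)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))    # recurrent reservoir weights
    W *= rho / max(abs(np.linalg.eigvals(W)))     # rescale spectral radius to rho

    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):                   # state update: x <- tanh(W x + W_in u)
        x = np.tanh(W @ x + W_in * u_t)
        states[t] = x

    # Ridge-regression readout: solve (S^T S + ridge * I) w = S^T y.
    A = states.T @ states + ridge * np.eye(n_res)
    w_out = np.linalg.solve(A, states.T @ y)
    return w_out, states

# Example: one-step-ahead prediction of a noisy sine.
t = np.linspace(0, 20 * np.pi, 2000)
s = np.sin(t) + 0.01 * np.random.default_rng(1).standard_normal(t.size)
w, S = esn_fit_readout(s[:-1], s[1:])
print("train MSE:", np.mean((S @ w - s[1:]) ** 2))
```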