Prediction of telephone calls load using Echo State Network with exogenous variables (2015)
DOI: 10.1016/j.neunet.2015.08.010

Cited by 78 publications (60 citation statements); References 46 publications.
“…First, we compare λ with L_max, the length of the longest diagonal line in an RP; see (10) and the related discussion. L_max is a global indicator of stability that we show here to be highly correlated with λ. Subsequently, we show that the edge of stability determined according to (15) identifies more accurately the region of the hyperparameter space where γ is maximized. To produce more interpretable results, for both input signals we independently map the values of each RQA measure, obtained for the different values of ρ and ω_i, into the [0, 1] interval using a unity-based normalization.…”
Section: B. Characterization of Reservoirs with RQA
confidence: 89%
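The unity-based normalization mentioned in this excerpt is ordinary min-max scaling, applied independently to each RQA measure over the explored grid of spectral-radius (ρ) and input-scaling (ω_i) values. A minimal sketch in Python (the function and variable names are illustrative and not taken from the cited paper):

```python
import numpy as np

def unity_normalize(values):
    """Min-max (unity-based) normalization of a set of measurements to [0, 1].

    `values` holds one RQA measure (e.g. L_max) evaluated over a grid of
    hyperparameters (spectral radius rho, input scaling omega_i).
    """
    values = np.asarray(values, dtype=float)
    vmin, vmax = values.min(), values.max()
    if vmax == vmin:                          # degenerate case: constant measure
        return np.zeros_like(values)
    return (values - vmin) / (vmax - vmin)

# Example: normalize L_max computed over a 2-D grid of (rho, omega_i) settings.
lmax_grid = np.random.rand(20, 20) * 50       # placeholder values
lmax_scaled = unity_normalize(lmax_grid)      # each entry now lies in [0, 1]
```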
“…This is justified by the fact that high-amplitude signals tend to saturate nonlinear activation functions and cause the poles to shrink toward the origin. This results in a system with (15). For the sake of readability, here we show only three RQA measures; see Table II for detailed results.…”
Section: B. Characterization of Reservoirs with RQA
confidence: 99%
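The saturation argument in this excerpt can be illustrated numerically: the derivative of tanh acts as the local gain of the linearized reservoir around its operating point, and high-amplitude inputs push that operating point into the flat region of the nonlinearity, shrinking the gain (and hence the magnitude of the linearized poles). The snippet below only illustrates this effect and is not code from the cited work:

```python
import numpy as np

# Effective local gain of tanh at operating point x is its derivative, 1 - tanh(x)^2.
# Larger operating points (high-amplitude signals) give smaller local gain.
x = np.linspace(0.0, 4.0, 5)
gain = 1.0 - np.tanh(x) ** 2
for xi, gi in zip(x, gain):
    print(f"operating point {xi:.1f} -> local gain {gi:.3f}")
# gain drops from 1.000 at x = 0 toward ~0.001 at x = 4
```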
“…Then, a feedforward output layer, called the readout, is trained on top of the reservoir using standard techniques from linear approximation theory, most notably ridge regression. Despite this simplification, ESNs have obtained remarkable results in many fields, including medical image segmentation [21], load prediction [22], language generation [23], and several others. Based on the strict separation between reservoir and readout, in [11] we proposed a distributed algorithm for training ESNs using the well-known alternating direction method of multipliers (ADMM) [24].…”
Section: Introduction
confidence: 99%
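For readers unfamiliar with the architecture being cited, the excerpt describes a two-stage model: a fixed random reservoir followed by a linear readout fitted with ridge regression. The following is a minimal illustrative sketch with toy data and assumed hyperparameters (spectral radius 0.9, penalty λ = 1e-4); it is not the implementation from [11] or [22]:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Reservoir (fixed, untrained) -------------------------------------------
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))           # input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))        # rescale spectral radius to 0.9

def run_reservoir(u):
    """Collect reservoir states for a 1-D input sequence u (shape [T])."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)                             # shape [T, n_res]

# --- Readout trained with ridge regression ----------------------------------
u = rng.standard_normal(1000)                           # toy input sequence
y = np.roll(u, 3)                                       # toy target: delayed input
X = run_reservoir(u)[50:]                               # drop initial washout
Y = y[50:]

lam = 1e-4                                              # ridge penalty
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ Y)
y_hat = X @ W_out                                       # readout predictions
```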
“…At the same time, standard ridge regression may not be the most suitable training algorithm for ESNs, as demonstrated by works exploring readouts trained via support-vector-based algorithms [27], elastic net penalties [22], and others. Specifically, in this paper we are concerned with training a sparse readout, i.e.…”
Section: Introduction
confidence: 99%
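As a contrast with the ridge readout sketched above, a sparse readout can be obtained by adding an L1 component to the penalty, which is the idea behind the elastic-net readout of [22]. The sketch below uses scikit-learn's ElasticNet on stand-in data; it illustrates the general technique, not the specific algorithm proposed in the citing paper:

```python
import numpy as np
from sklearn.linear_model import ElasticNet   # combined L1 + L2 penalty

rng = np.random.default_rng(0)
X = rng.standard_normal((800, 200))           # stand-in for collected reservoir states
Y = X[:, :5] @ rng.standard_normal(5) + 0.01 * rng.standard_normal(800)  # toy target

# Elastic-net readout: the L1 term drives many readout weights exactly to zero.
readout = ElasticNet(alpha=1e-2, l1_ratio=0.9, max_iter=10000)
readout.fit(X, Y)
print("nonzero readout weights:", np.count_nonzero(readout.coef_), "of", X.shape[1])
```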