2001
DOI: 10.1023/a:1011045100817
Cited by 13 publications (3 citation statements)
References 11 publications
“…Many techniques have been developed for prediction of time series, including autoregressive moving average (Box et al 1994), nearest neighborhood prediction (Garcia and Almeida 2005), adaptive filters (Hartmann 2010), Kalman filters (Silva 2010), radial basis functions (Leung et al 2001, Zou et al 2003, Song et al 2005), wavelet decomposition (Wu 2010), artificial neural networks (Bertels et al 2001, Han et al 2004, Hastie et al 2001, Haykin 1999, Rodrigues 2010), principal component analysis (Petrolis et al 2010), and many others (Kantz and Schreiber 2004, Li et al 2010, Langley et al 2010). Our preliminary studies using a few different techniques indicate that neural networks are more robust compared to others.…”
Section: Methods
confidence: 99%
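The excerpt above surveys time-series prediction techniques and notes that the citing authors found neural networks the most robust in their preliminary studies. As a rough illustration of that general approach, not the cited authors' actual model, the sketch below trains a single-hidden-layer network on sliding windows of a toy signal; the window length, hidden size, learning rate, and epoch count are arbitrary assumptions.

```python
# Minimal sketch (not the cited authors' model): one-step-ahead time-series
# prediction with a single-hidden-layer network trained by plain backpropagation.
# Window length, hidden size, learning rate, and epochs are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Toy series: a noisy sine wave standing in for a real measured signal.
t = np.arange(500)
series = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)

def make_windows(x, w):
    """Turn a 1-D series into (inputs of length w, next-value targets)."""
    X = np.stack([x[i:i + w] for i in range(len(x) - w)])
    y = x[w:]
    return X, y

W_LEN, HIDDEN, LR, EPOCHS = 10, 8, 0.01, 200
X, y = make_windows(series, W_LEN)

# Weights of a tanh hidden layer and a linear output unit.
W1 = rng.standard_normal((W_LEN, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal(HIDDEN) * 0.1
b2 = 0.0

for _ in range(EPOCHS):
    # Forward pass
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # one-step-ahead predictions
    err = pred - y                    # output errors

    # Backward pass (mean-squared-error gradients)
    n = len(y)
    dW2 = h.T @ err / n
    db2 = err.mean()
    dh = np.outer(err, W2) * (1.0 - h ** 2)   # tanh derivative
    dW1 = X.T @ dh / n
    db1 = dh.mean(axis=0)

    # Gradient-descent update
    W2 -= LR * dW2; b2 -= LR * db2
    W1 -= LR * dW1; b1 -= LR * db1

# Predict the value following the last observed window.
last = series[-W_LEN:]
next_val = np.tanh(last @ W1 + b1) @ W2 + b2
print(f"predicted next value: {next_val:.3f}")
```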
“…For the hidden nodes, the error can be computed in the following way:

$\delta_{pj} = f'(\mathrm{net}_{pj}) \sum_{k} \delta_{pk} W_{kj}$ (5)

where $\delta_{pk} W_{kj}$ represents the error of the connected neurons of the above layer, multiplied by the corresponding weights, which is propagated throughout the net. For exactly the same reasons as mentioned above, the first derivative of the transfer function is included.…”
Section: Figure
confidence: 99%
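Equation (5) in the excerpt above is the standard backpropagation rule for hidden units: the error terms of the layer above are propagated back through the connecting weights and scaled by the derivative of the transfer function. A minimal sketch of that single step follows; a sigmoid transfer function and the array shapes are assumptions for illustration only.

```python
# Sketch of the hidden-layer error term in Eq. (5):
#   delta_pj = f'(net_pj) * sum_k delta_pk * W_kj
# A sigmoid transfer function is assumed here purely for illustration.
import numpy as np

def hidden_deltas(net_hidden, deltas_above, W_above):
    """net_hidden:   (n_hidden,)          pre-activations net_pj of the hidden layer
    deltas_above:    (n_out,)             error terms delta_pk of the layer above
    W_above:         (n_out, n_hidden)    weights W_kj from hidden unit j to output k
    returns:         (n_hidden,)          error terms delta_pj for the hidden layer
    """
    f = 1.0 / (1.0 + np.exp(-net_hidden))       # sigmoid activation f(net_pj)
    f_prime = f * (1.0 - f)                     # its first derivative
    back_propagated = W_above.T @ deltas_above  # sum_k delta_pk * W_kj
    return f_prime * back_propagated

# Tiny usage example with made-up numbers.
net_h = np.array([0.2, -0.5, 1.0])
delta_out = np.array([0.1, -0.3])
W_out = np.array([[0.4, -0.2, 0.7],
                  [0.1,  0.5, -0.6]])
print(hidden_deltas(net_h, delta_out, W_out))
```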
“…For a detailed discussion of neural networks, we refer to Ref. [2], which previously appeared in Complexity, and for an extensive discussion of the results reported here we refer to Refs. [3], [4], and [5]. The following restrictions apply to the research reported on in this article.…”
confidence: 99%