Neural network method for determining embedding dimension of a time series
Maus and Sprott (2011)
DOI: 10.1016/j.cnsns.2010.10.030

Cited by 51 publications (20 citation statements)
References 16 publications

“…At 1,024 Hz, this provided a frequency resolution of 17 Hz, which was then zero-padded to a frequency resolution of 2 Hz. There are other methods to determine the embedding dimension (Cao, 1997; Maus and Sprott, 2011); it is beyond the scope of this paper to include an exhaustive discussion of these methods.…”
Section: Results (mentioning)
confidence: 99%
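
As a rough check on the numbers in the excerpt above, the sketch below shows how zero-padding an FFT refines the frequency-bin spacing from about 17 Hz to 2 Hz at a 1,024 Hz sampling rate. The 60-sample window length and the random test signal are assumptions for illustration, not details from the citing study.

```python
# Minimal sketch (window length and test signal assumed): zero-padding an FFT
# so that the bin spacing drops from ~17 Hz to 2 Hz at fs = 1,024 Hz.
import numpy as np

fs = 1024.0                          # sampling rate (Hz)
x = np.random.randn(60)              # hypothetical 60-sample analysis window
print(fs / len(x))                   # native bin spacing: ~17 Hz

n_padded = 512                       # zero-pad the window to 512 points
X = np.fft.rfft(x, n=n_padded)       # rfft zero-pads automatically when n > len(x)
freqs = np.fft.rfftfreq(n_padded, d=1.0 / fs)
print(freqs[1] - freqs[0])           # padded bin spacing: 2 Hz
```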
“…It has been demonstrated in Maus and Sprott that E1(m) would remain almost stable when m is greater than the threshold m0 (the minimal embedding dimension that we wish to obtain).…”
Section: Icing Prediction Modeling Using PSR and Multivariate Time Series (mentioning)
confidence: 91%
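
The stabilisation of E1(m) described here can be illustrated with a short, Cao-style computation of the statistic. The delay, the maximum-norm distance, and the noisy sine used as test data are assumptions for this sketch; it is not the cited authors' code.

```python
# Hedged sketch of the E1(m) saturation criterion (Cao-style statistic).
# Delay tau, norm, and test signal are assumptions, not the cited method's code.
import numpy as np

def delay_vectors(x, m, tau=1):
    """Matrix of m-dimensional delay vectors of the series x."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(m)])

def E(x, m, tau=1):
    """Mean expansion of nearest-neighbour distances from dimension m to m+1."""
    Y_m = delay_vectors(x, m, tau)[:len(x) - m * tau]   # align with (m+1)-dim vectors
    Y_m1 = delay_vectors(x, m + 1, tau)
    total = 0.0
    for i in range(len(Y_m1)):
        d = np.max(np.abs(Y_m - Y_m[i]), axis=1)        # maximum-norm distances in dim m
        d[i] = np.inf                                   # exclude the point itself
        j = np.argmin(d)                                # nearest neighbour in dim m
        total += np.max(np.abs(Y_m1[i] - Y_m1[j])) / d[j]
    return total / len(Y_m1)

def E1(x, m, tau=1):
    return E(x, m + 1, tau) / E(x, m, tau)

# E1(m) should flatten out once m exceeds the minimal embedding dimension m0.
x = np.sin(0.2 * np.arange(1000)) + 0.01 * np.random.randn(1000)
print([round(E1(x, m), 3) for m in range(1, 6)])
```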
“…There are many machine learning algorithms, e.g., BPNN, self-organizing map, RNN, RBM, and SVR, that can be used to solve this regression task. One of the most powerful nonlinear regression algorithms, SVR, is adopted in this paper to fit F_τ.…”
Section: Icing Prediction Modeling Using PSR and Multivariate Time Series (mentioning)
confidence: 99%
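
As an illustration of the regression step mentioned here, the sketch below fits an SVR predictor on a toy delay embedding. The RBF kernel, the embedding parameters m and tau, and the reading of F_τ as a tau-step-ahead map are assumptions, not details taken from the citing paper.

```python
# Hedged sketch: fit an SVR model that maps a reconstructed state vector to the
# value tau steps ahead (one reading of F_tau). Parameters and data are assumed.
import numpy as np
from sklearn.svm import SVR

x = np.sin(0.1 * np.arange(1500)) + 0.05 * np.random.randn(1500)
m, tau = 4, 5                                    # assumed embedding dimension and delay

# Phase-space reconstruction: rows are delay vectors, targets lie tau steps ahead.
idx = np.arange(len(x) - m * tau)
X = np.column_stack([x[idx + k * tau] for k in range(m)])
y = x[idx + m * tau]

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:-100], y[:-100])
rmse = np.sqrt(np.mean((model.predict(X[-100:]) - y[-100:]) ** 2))
print("held-out RMSE:", rmse)
```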
“…The lag selection is performed as a postprocessing stage in Maus and Sprott (2011) with a sensitivity computation of the output to each time lag. The initial stage trains a single-layer, feed-forward ANN based on d time lags, with d chosen large enough to capture the relevant dynamics of the time series.…”
Section: Lags Selection (mentioning)
confidence: 99%
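
A sketch of the two-stage procedure described in this excerpt is given below: a feed-forward network is trained on d candidate lags, and each lag is then scored by the sensitivity of the prediction to that input. The network size, the perturbation-based sensitivity estimate, and the value of d are assumptions, not the exact computation of Maus and Sprott (2011).

```python
# Hedged sketch: train a feed-forward ANN on d time lags, then rank the lags by
# how sensitive the output is to each lagged input (perturbation estimate).
# Network size, perturbation eps, d, and the test series are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

x = np.sin(0.3 * np.arange(3000)) + 0.05 * np.random.randn(3000)
d = 8                                              # assumed upper bound on the lags

# One-step-ahead prediction from the d most recent values.
X = np.column_stack([x[k:len(x) - d + k] for k in range(d)])   # columns: lag d ... lag 1
y = x[d:]

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)

# Sensitivity of the output to each lag: nudge one input column, hold the rest fixed.
eps = 0.01
base = net.predict(X)
sens = [np.mean(np.abs(net.predict(X + eps * np.eye(d)[k]) - base)) / eps
        for k in range(d)]
print("sensitivity per lag (oldest to newest):", np.round(sens, 3))
```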