2021
DOI: 10.3390/e23111491

Entanglement-Structured LSTM Boosts Chaotic Time Series Forecasting

Abstract: Traditional machine-learning methods are inefficient at capturing chaos in nonlinear dynamical systems, especially when the time difference Δt between consecutive steps is so large that the extracted time series appears random. Here, we introduce a new long short-term memory (LSTM)-based recurrent architecture by tensorizing the cell-state-to-state propagation therein, maintaining the long-term memory feature of LSTM while simultaneously enhancing the learning of short-term nonlinear complexity. We s…
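The abstract cuts off before the architectural details, so the following is only a rough sketch of the general idea of tensorizing the state-to-state propagation: a minimal NumPy LSTM cell whose dense recurrent matrix is replaced by a factorized map through a small bond space. The sizes D, H, R and the plain low-rank factorization are illustrative assumptions, not the paper's entanglement-structured construction.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
D, H = 8, 16          # input and hidden sizes (hypothetical)
R = 4                 # bond dimension of the factorized state map (assumption)

# Standard input-to-gate weights for the four LSTM gates (i, f, o, g).
Wx = rng.normal(scale=0.1, size=(4 * H, D))
b = np.zeros(4 * H)

# Instead of one dense recurrent matrix, route the state-to-state
# propagation through a small bond space of size R -- a low-rank
# stand-in for the paper's tensor-network construction.
U1 = rng.normal(scale=0.1, size=(R, H))       # h_{t-1} -> bond space
U2 = rng.normal(scale=0.1, size=(4 * H, R))   # bond space -> gates

def lstm_step(x, h, c):
    z = Wx @ x + U2 @ (U1 @ h) + b            # factorized recurrence
    i, f, o, g = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(20, D)):            # a dummy length-20 series
    h, c = lstm_step(x, h, c)
```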

Cited by 5 publications (5 citation statements)
References 34 publications (46 reference statements)
“…LSTMs can effectively deal with the "long-term dependency" problem in RNNs and can capture deeper connections in sequences. Therefore, in recent years, LSTM has been widely used in time series forecasting [44,45].…”
Section: Methods for Components Forecasting (mentioning)
confidence: 99%
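As a concrete illustration of the usage this citing paper describes, here is a minimal one-step-ahead LSTM forecaster in PyTorch. It is a generic sketch; the hidden size and single-feature input are illustrative choices, not taken from the cited work.

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Predict the next value of a univariate series from its history."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):              # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)          # out: (batch, seq_len, hidden)
        return self.head(out[:, -1])   # last hidden state -> next value

model = LSTMForecaster()
x = torch.randn(4, 50, 1)              # 4 dummy series of length 50
y_hat = model(x)                       # shape (4, 1)
```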
“…In addition, the loss function used in all LSTM neural networks in this paper is the mean square error function. To ensure optimal network performance, enhance convergence speed, and prevent gradient explosion, it is imperative to normalize the input data [45,49]. The normalization of the input data in this paper is carried out according to the following formula:…”
Section: EEMD-LSTM-PSO Model (mentioning)
confidence: 99%
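The snippet is truncated before the formula itself. The standard choice in this setting, and presumably what the citing paper uses (an assumption, since the excerpt cuts off), is min-max scaling, which maps each input feature into [0, 1]:

```latex
x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}
```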
“…Rather, it suffices to simply store the vectors a, b, c, …, of which the total size is D·N. This, indeed, is how a tensor network works: by leveraging different ways of factorization that can be depicted through different graphical network structures [46]. Among various tensor networks, the matrix product state (MPS) is among the most researched [47].…”
Section: Quantum Networks Are the Basis of Tensor Networks (mentioning)
confidence: 99%
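A tiny sketch of the storage argument in that statement: a rank-1 tensor built as an outer product of N vectors has D**N entries but is determined by only D·N numbers (an MPS generalizes this by allowing higher-rank "bonds" between the factors). All sizes here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 4, 6                          # local dimension and number of factors

# A rank-1 tensor T = a (x) b (x) c (x) ... has D**N entries,
# but is fully determined by N vectors of length D: storage D*N.
vectors = [rng.normal(size=D) for _ in range(N)]

full = vectors[0]
for v in vectors[1:]:
    full = np.tensordot(full, v, axes=0)   # build the full outer product

print(full.size)                     # D**N = 4096 entries in the full tensor
print(sum(v.size for v in vectors))  # D*N = 24 numbers actually needed
```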
“…For example, in the article (Yang, Krompass & Tresp, 2017) the authors use Tensor-Train decomposition to deal with high-dimensional inputs, obtaining results competitive with classical approaches to dimensionality reduction. In Meng & Yang (2021) and Yu et al. (2017), scholars apply tensorization to enhance RNN and LSTM models for long-term forecasting of chaotic time series. Using tensors for data representation has also proved effective in natural language problems.…”
Section: Literature Review (mentioning)
confidence: 99%
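To make the Tensor-Train idea from that review concrete, here is a minimal sketch of a TT-format matrix-vector product, the building block used to compress large weight matrices in tensorized RNNs and LSTMs. The mode sizes and bond dimension are arbitrary illustrations, not values from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# A (16*16) x (4*4) weight matrix stored as two TT cores with bond size r.
# Full matrix: 256 * 16 = 4096 entries; cores: 16*4*3 + 3*16*4 = 384.
m1, m2, n1, n2, r = 16, 16, 4, 4, 3
core1 = rng.normal(size=(m1, n1, r))          # W[i1,i2,j1,j2] = sum_r
core2 = rng.normal(size=(r, m2, n2))          #   core1[i1,j1,r]*core2[r,i2,j2]

def tt_matvec(x):
    """Apply the TT-format matrix to x without forming the full matrix."""
    X = x.reshape(n1, n2)
    t = np.einsum('ijr,jb->irb', core1, X)    # contract first input mode
    y = np.einsum('irb,rkb->ik', t, core2)    # contract bond + second mode
    return y.reshape(m1 * m2)

x = rng.normal(size=n1 * n2)                  # a 16-dim input vector
y = tt_matvec(x)                              # a 256-dim output
```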