2020
DOI: 10.1609/aaai.v34i07.6819
Self-Attention ConvLSTM for Spatiotemporal Prediction

Abstract: Spatiotemporal prediction is challenging due to complex dynamic motion and appearance changes. Existing work concentrates on embedding additional cells into the standard ConvLSTM to memorize spatial appearances during prediction. These models rely on convolutional layers to capture spatial dependence, which are local and inefficient. However, long-range spatial dependencies are significant for spatiotemporal applications. To extract spatial features with both global and local dependencies, we introduce…
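To make the idea in the abstract concrete, below is a minimal PyTorch sketch of spatial self-attention applied to a ConvLSTM hidden state: 1×1 convolutions produce queries, keys, and values, and every spatial position aggregates features from every other position, giving a global receptive field on top of the local convolutions. Module and parameter names (SpatialSelfAttention, hidden) are illustrative assumptions, not the authors' released code.

```python
# A minimal sketch of spatial self-attention over a ConvLSTM hidden state.
# Names and sizes are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialSelfAttention(nn.Module):
    def __init__(self, channels, hidden=16):
        super().__init__()
        # 1x1 convolutions project the feature map to queries, keys, and values.
        self.to_q = nn.Conv2d(channels, hidden, kernel_size=1)
        self.to_k = nn.Conv2d(channels, hidden, kernel_size=1)
        self.to_v = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, h):                        # h: (B, C, H, W)
        b, c, hgt, wid = h.shape
        n = hgt * wid
        q = self.to_q(h).view(b, -1, n)          # (B, hidden, N)
        k = self.to_k(h).view(b, -1, n)          # (B, hidden, N)
        v = self.to_v(h).view(b, c, n)           # (B, C, N)
        # Similarity between every pair of spatial positions: global dependence.
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # (B, N, N)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, hgt, wid)
        return out + h                           # residual keeps local features
```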

Cited by 164 publications (65 citation statements)
References 13 publications
“…The prediction of polar vortex intensity can be improved by adopting the methods of the benchmark models. The combination of the self-attention mechanism and the ConvLSTM model achieves state-of-the-art results in spatiotemporal prediction [95]. Applying the most advanced attention mechanisms to predict a particular day, or a particular strong/weak polar vortex event, over a long time series may achieve better results.…”
Section: Discussion (mentioning)
confidence: 99%
“…The hidden state of the minor input depends only on its temporal memory, which helps preserve the spatiotemporal transformation information of the minor input data. In addition, to improve the MFSP-LSTM unit's adaptability to long-distance dependencies, we followed Lin's work [27] and added the SAM to the MFSP-LSTM unit. We expanded the global spatiotemporal receptive field of H_t and N_t through the memory M_t.…”
Section: MFSP-LSTM (mentioning)
confidence: 99%
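As a rough illustration of such a memory, the sketch below lets the hidden state H_t query a memory tensor M_{t-1} via cross-attention, so each position reads global spatiotemporal context, and then refreshes the memory with a simple gate. This is a hedged simplification of the SAM design; the class and layer names (SelfAttentionMemory, gates) are hypothetical and the gating differs in detail from the published unit.

```python
# A hedged, simplified SAM-style unit: H_t queries M_{t-1}, then a gate
# updates the memory. Names are hypothetical, not the MFSP-LSTM code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionMemory(nn.Module):
    def __init__(self, channels, hidden=16):
        super().__init__()
        self.q = nn.Conv2d(channels, hidden, 1)      # queries from H_t
        self.k = nn.Conv2d(channels, hidden, 1)      # keys from M_{t-1}
        self.v = nn.Conv2d(channels, channels, 1)    # values from M_{t-1}
        self.gates = nn.Conv2d(2 * channels, 3 * channels, 1)

    def forward(self, h, m):                         # both (B, C, H, W)
        b, c, hh, ww = h.shape
        n = hh * ww
        q = self.q(h).view(b, -1, n).transpose(1, 2) # (B, N, hidden)
        k = self.k(m).view(b, -1, n)                 # (B, hidden, N)
        v = self.v(m).view(b, c, n)                  # (B, C, N)
        attn = F.softmax(torch.bmm(q, k), dim=-1)    # (B, N, N)
        ctx = torch.bmm(v, attn.transpose(1, 2)).view(b, c, hh, ww)
        # Gated memory update: blend new content into M, then read it out.
        i, g, o = torch.chunk(self.gates(torch.cat([h, ctx], 1)), 3, 1)
        m_next = torch.sigmoid(i) * torch.tanh(g) + (1 - torch.sigmoid(i)) * m
        return torch.sigmoid(o) * m_next, m_next     # (H_t', M_t)
```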
“…To improve the long-term extrapolation ability of spatiotemporal prediction models, HPRNN [26] proposes a hierarchical prediction strategy that reduces the accumulation of prediction errors over time through a recurrent coarse-to-fine mechanism. Lin et al. [27] proposed self-attention memory (SAM) to memorize features with long-range dependencies in both the spatial and temporal domains. SAM can be embedded in most spatiotemporal prediction recurrent neural networks.…”
Section: Introduction (mentioning)
confidence: 99%
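Since SAM is described as embeddable in most recurrent spatiotemporal predictors, one plausible wiring is: run the base cell at each step, then let SAM refine its hidden state. The minimal ConvLSTM cell and rollout below are illustrative assumptions, meant to be used with the SelfAttentionMemory sketch above, not a faithful reproduction of any cited model.

```python
# A sketch of embedding a SAM-style module into a generic recurrent predictor.
# The ConvLSTM cell is deliberately minimal; `sam` is any module with the
# (h, m) -> (h', m') interface of the SelfAttentionMemory sketch above.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, 3, padding=1)

    def forward(self, x, h, c):
        i, f, g, o = torch.chunk(self.conv(torch.cat([x, h], 1)), 4, 1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

def rollout(cell, sam, frames):                  # frames: (T, B, C, H, W)
    t, b, c, hh, ww = frames.shape
    hid = cell.conv.out_channels // 4
    h = frames.new_zeros(b, hid, hh, ww)
    cst = torch.zeros_like(h)
    m = torch.zeros_like(h)                      # SAM memory, carried over time
    outs = []
    for x in frames:                             # one step per input frame
        h, cst = cell(x, h, cst)
        h, m = sam(h, m)                         # SAM refines the hidden state
        outs.append(h)
    return torch.stack(outs)                     # (T, B, hid, H, W)
```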
“…At the same time, deep learning has also been applied to natural language processing (NLP) [16], [17] and sequence learning [18], [19]. Long short-term memory (LSTM) [20], [21] can effectively learn relationships between distant elements of a sequence. The LSTM encoder-decoder framework [18] effectively solves sequence-to-sequence learning problems using temporally concatenated LSTMs.…”
Section: Introduction (mentioning)
confidence: 99%
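For reference, a minimal LSTM encoder-decoder in the spirit of the framework cited as [18] might look like the sketch below: the encoder's final state initializes the decoder, which then unrolls autoregressively over the prediction horizon. Dimensions and names are assumptions for illustration only.

```python
# A minimal LSTM encoder-decoder for sequence-to-sequence prediction.
# Layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, dim)

    def forward(self, past, horizon):            # past: (B, T, dim)
        _, state = self.encoder(past)            # summarize the input sequence
        x = past[:, -1:, :]                      # seed with the last observation
        preds = []
        for _ in range(horizon):                 # autoregressive decoding
            out, state = self.decoder(x, state)
            x = self.head(out)                   # next-step prediction
            preds.append(x)
        return torch.cat(preds, dim=1)           # (B, horizon, dim)
```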