2022
DOI: 10.1109/tvt.2022.3178094
FDSA-STG: Fully Dynamic Self-Attention Spatio-Temporal Graph Networks for Intelligent Traffic Flow Prediction

Cited by 28 publications (9 citation statements)
References 37 publications
“…However, in different scenarios, the demands placed on the grid also differ [7]. For example, in scenarios such as fault identification and prediction, a large amount of AI computing and network bandwidth resources is required for real-time perception and accurate prediction [8]. However, these resources are not abundant at this stage, so how to reasonably allocate AI computing and bandwidth resources has become a key difficulty in the development of smart grids.…”
Section: B. Pose Challenges (mentioning)
confidence: 99%
“…Chen et al [37] developed a prediction model called a location graph convolutional network (location-GCN) by designing a novel graph convolution to construct a dynamic adjacency matrix, and combined it with LSTM. Although these models have achieved some success in traffic flow prediction, they have limitations in capturing the important spatio-temporal features [38].…”
Section: Spatio-Temporal Feature Extraction (mentioning)
confidence: 99%
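The location-GCN idea described in the statement above (a dynamic adjacency matrix built from node features and fed into a graph convolution, then combined with LSTM) can be sketched roughly as follows. This is a hypothetical toy illustration, not the cited authors' implementation; the node features and sizes are invented, and the LSTM stage is omitted for brevity.

```python
# Toy sketch of a dynamic-adjacency graph convolution step, in the
# spirit of the location-GCN idea (hypothetical, not the paper's code).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dynamic_adjacency(X):
    """Build a row-normalised adjacency from pairwise feature similarity."""
    n = len(X)
    sim = [[max(dot(X[i], X[j]), 0.0) for j in range(n)] for i in range(n)]
    return [[s / (sum(row) or 1.0) for s in row] for row in sim]

def graph_conv(X, A):
    """One propagation step: each node takes a weighted average of its
    neighbours' features according to the (dynamic) adjacency."""
    n, d = len(X), len(X[0])
    return [[sum(A[i][k] * X[k][j] for k in range(n)) for j in range(d)]
            for i in range(n)]

# Three sensor nodes with 2-d features (invented toy traffic readings).
X = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
A = dynamic_adjacency(X)   # recomputed from current features, hence "dynamic"
H = graph_conv(X, A)
```

In a full model the propagated features `H` would then be fed, per time step, into a recurrent layer such as an LSTM to capture temporal dependence.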
“…As shown in Figure 4, the decoder focuses on modelling both the seasonal-periodic and the trend-periodic parts, resulting in a more complex structure than the encoder. There are M layers in the decoder, and the overall equations for the l-th layer of the decoder are summarized in Equation (38). Details are shown in Equations (39) to (42).…”
Section: Decoder (mentioning)
confidence: 99%
“…Jin et al [28] put forward a GAN-based short-term link traffic prediction model under a parallel learning framework (PL-WGAN) for urban networks, which adds a spatio-temporal attention mechanism to adjust the importance of different temporal and spatial contexts. In addition, Duan et al [29] proposed the fully dynamic self-attention spatio-temporal graph network (FDSA-STG), which improves the attention mechanism using graph attention networks (GATs). This model jointly modifies the GATs and the self-attention mechanism so that spatial, temporal, and periodic correlations are dynamically focused on and integrated.…”
Section: B. Attention Mechanism in Time Series Data Prediction (mentioning)
confidence: 99%
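The GAT-style attention that FDSA-STG is described as building on can be illustrated with a minimal generic sketch: raw per-neighbour scores are softmax-normalised into attention weights, which then aggregate neighbour features into an updated node representation. This is an assumption-laden illustration of graph attention in general, not the paper's code; in practice the raw scores come from a learned function of the concatenated node features.

```python
# Minimal sketch of GAT-style neighbour attention (hypothetical toy,
# not the FDSA-STG implementation).
import math

def gat_attention(scores):
    """Softmax-normalise raw neighbour scores into attention weights."""
    m = max(scores)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attend(h_neighbours, weights):
    """Aggregate neighbour features with the attention weights."""
    d = len(h_neighbours[0])
    return [sum(w * h[j] for w, h in zip(weights, h_neighbours))
            for j in range(d)]

# One target node with three neighbours (invented 2-d features); raw
# scores stand in for the output of a learned scoring function.
neigh = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
alpha = gat_attention([0.2, 0.1, 0.7])
h_new = attend(neigh, alpha)
```

Making these weights depend on the current spatial, temporal, and periodic context — rather than on a fixed graph — is the "fully dynamic" aspect the statement attributes to FDSA-STG.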