2021
DOI: 10.3390/ijgi10050336

A Dynamic and Static Context-Aware Attention Network for Trajectory Prediction

Abstract: Forecasting the motion of surrounding vehicles is necessary for an autonomous driving system operating in complex traffic. Trajectory prediction gives vehicles foresight, helping them make more sensible decisions. However, traditional models treat trajectory prediction as a simple sequence-prediction task; ignoring inter-vehicle interaction and environmental influence degrades these models on real-world datasets. To address this issue, we propose a novel Dynamic and Static Context-aw…

Cited by 19 publications (5 citation statements)
References 30 publications
“…DSCAN [ 46 ]: This method uses a constraint network and models attention between vehicles to extract the weights to make future predictions.…”
Section: Methods
confidence: 99%
“…[31,76,79] first incorporate the map information into each vehicle by applying cross-attention, then use the attention operation among the vehicles to obtain interaction features. Yu et al. [77] first learn spatial-proximity information from the encoding of the grid map, then use an attention mechanism to obtain the influence of different grid cells on the target vehicle's future motion. To fully extract the interaction information, some works [32,73] use the multi-head attention mechanism to consider the interaction in different dimensions.…”
Section: Pairwise-based
confidence: 99%
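The multi-head attention these works apply among vehicles can be illustrated with a minimal NumPy sketch. This is not the cited models' implementation; the random projection matrices stand in for learned weights, and all dimensions are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, num_heads, rng):
    """Scaled dot-product attention among N vehicle feature vectors.

    X: (N, d) per-vehicle encodings; returns (N, d) interaction
    features and the (num_heads, N, N) attention maps.
    """
    N, d = X.shape
    assert d % num_heads == 0
    dh = d // num_heads
    # Random projections stand in for learned Wq/Wk/Wv matrices.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Split into heads: (num_heads, N, dh), so each head attends
    # to the vehicles in a different learned subspace ("dimension").
    split = lambda M: M.reshape(N, num_heads, dh).transpose(1, 0, 2)
    Qh, Kh, Vh = split(Q), split(K), split(V)
    A = softmax(Qh @ Kh.transpose(0, 2, 1) / np.sqrt(dh))  # (H, N, N)
    out = (A @ Vh).transpose(1, 0, 2).reshape(N, d)
    return out, A

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))        # 5 vehicles, 8-dim encodings
feats, attn = multi_head_attention(X, num_heads=2, rng=rng)
print(feats.shape, attn.shape)         # (5, 8) (2, 5, 5)
```

Each head produces an N×N weight matrix over vehicle pairs, which is why the surveyed works describe multi-head attention as capturing interaction "in different dimensions".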
“…[76] 2021: 1D-CNN, GRU, AM; [79] 2022: 1D-CNN, GRU, GCN, AM; [31] 2021: LSTM, AM; [32] 2021: LSTM, AM; [73] 2020: LSTM, AM; [77] 2021: LSTM, CNN, AM; [60] 2022: LSTM, MLP; [80] 2022: Transformer, MLP; [81] 2023: GAN, GCN, FPN, AM (encode the correlation features between each surrounding vehicle and the target vehicle based on a fully connected layer). [46] 2020: GNN, AM, MLP (construct a fully connected undirected graph, then extract the interaction feature based on a graph attention network).…”
Section: Class / Characteristics / Work / Year / DL Approaches / Summary (table)
confidence: 99%
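The graph-attention step described for [46] (a fully connected undirected graph over vehicles, with attention-weighted feature aggregation) can be sketched in a GAT-style layer. This is a minimal illustration, not the paper's implementation; the weight matrix and attention vector are random stand-ins for learned parameters.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gat_layer(H, W, a):
    """One graph-attention layer on a fully connected vehicle graph.

    H: (N, d_in) node features, W: (d_in, d_out) projection,
    a: (2*d_out,) attention vector. Returns aggregated features
    and the (N, N) attention coefficients.
    """
    Z = H @ W                             # project node features
    d = Z.shape[1]
    # Additive attention score e_ij = LeakyReLU(a^T [z_i || z_j]),
    # computed by broadcasting the source and destination halves.
    s_src, s_dst = Z @ a[:d], Z @ a[d:]
    e = leaky_relu(s_src[:, None] + s_dst[None, :])   # (N, N)
    alpha = softmax(e, axis=1)            # normalise over neighbours
    return alpha @ Z, alpha               # attention-weighted aggregation

rng = np.random.default_rng(1)
H = rng.standard_normal((4, 6))           # 4 vehicles, 6-dim features
W = rng.standard_normal((6, 3))
a = rng.standard_normal(6)
out, alpha = gat_layer(H, W, a)
print(out.shape, alpha.shape)             # (4, 3) (4, 4)
```

Because the graph is fully connected, every vehicle attends to every other; sparser graphs would mask `e` before the softmax.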
“…Kim, Kum and Choi [160] presented a recursive prediction framework in which the predicted trajectory is also used as an input feature to an LSTM-based encoder-decoder architecture. As in other approaches, the attention mechanism sits between the encoder and decoder components to selectively highlight specific features of the context vector before the decoder takes it as input [101], [161]. However, an attention mechanism can also extract features from specific contexts: Zhang et al. [162] proposed a module for social features only, which extracts spatial and temporal interaction features that are concatenated with context vectors from other encoders before being fed to the decoder.…”
Section: K. Attention Mechanism
confidence: 99%
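The encoder-decoder attention described above (weighting encoder states by their relevance to the current decoder state before forming the context vector) can be sketched as follows. This is a generic dot-product attention illustration under assumed dimensions, not the architecture of any cited paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_context(dec_state, enc_states):
    """Dot-product attention between decoder and encoder.

    dec_state: (d,) current decoder hidden state.
    enc_states: (T, d) encoder hidden states over T past time steps.
    Returns the blended (d,) context vector and the (T,) weights.
    """
    # Score each encoder step by similarity to the decoder state.
    scores = enc_states @ dec_state / np.sqrt(dec_state.size)  # (T,)
    weights = softmax(scores)             # highlight relevant steps
    return weights @ enc_states, weights  # weighted sum = context

rng = np.random.default_rng(2)
enc = rng.standard_normal((10, 16))   # 10 encoder steps of past trajectory
dec = rng.standard_normal(16)         # current decoder hidden state
ctx, w = attention_context(dec, enc)
print(ctx.shape, round(w.sum(), 6))   # (16,) 1.0
```

Running this once per decoder step gives a step-specific context vector, which is what lets the decoder selectively emphasise different parts of the observed trajectory at each prediction horizon.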