2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)
DOI: 10.1109/itsc55140.2022.9922451
PreTR: Spatio-Temporal Non-Autoregressive Trajectory Prediction Transformer

Cited by 7 publications (4 citation statements) · References 21 publications
“…This was achieved by flattening the agent information and the temporal sequence so that attention is applied over both simultaneously. Achaji [13] pointed out the time-consuming nature of the merged attention used in Agentformer [12] and proposed a method based on temporal-spatial divided attention. They also utilized a non-autoregressive model based on learnable queries to enable parallel application of Transformers.…”
Section: A. Trajectory Prediction on Bird's Eye View
confidence: 99%
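The contrast between merged attention and temporal-spatial divided attention can be made concrete with a minimal NumPy sketch. This is illustrative only: the shapes, dimensions, and single-head formulation are assumptions, not the configuration used in PreTR or Agentformer. Merged attention scores an (A·T)×(A·T) matrix over all agent-time tokens at once, while divided attention attends over T steps per agent and then over A agents per step.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # single-head scaled dot-product attention over the second-to-last axis
    d = q.shape[-1]
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d)
    return softmax(scores) @ v

rng = np.random.default_rng(0)
A, T, D = 3, 5, 8                      # agents, timesteps, feature dim (assumed)
x = rng.normal(size=(A, T, D))

# Merged attention: flatten agents and time into one A*T token sequence
flat = x.reshape(1, A * T, D)
merged = attention(flat, flat, flat).reshape(A, T, D)

# Divided attention: temporal attention per agent, then spatial per timestep
temporal = attention(x, x, x)          # (A, T, D): each agent attends over T
t_first = np.swapaxes(temporal, 0, 1)  # (T, A, D): each step attends over A
divided = np.swapaxes(attention(t_first, t_first, t_first), 0, 1)

# Score-matrix cost: merged is (A*T)^2 entries, divided is A*T^2 + T*A^2
print(merged.shape, divided.shape)     # both (3, 5, 8)
```

For these toy sizes the merged score matrix has (3·5)² = 225 entries versus 3·5² + 5·3² = 120 for the divided variant, which is the efficiency argument the citing authors attribute to [13].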
“…Furthermore, if both training and inference operate in an autoregressive manner, applying previous predictions, this hampers the use of parallelization during training, resulting in inefficient learning time. Therefore, taking inspiration from [13], we adopt a learnable query with a temporal dimension of N_p. By incorporating the learnable query, parallel decoding can be performed during both training and inference, effectively addressing the aforementioned issues.…”
Section: Transformer-Based Trajectory Prediction
confidence: 99%
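The learnable-query mechanism the citing authors adopt can be sketched in a few lines of NumPy. This is a hedged illustration, not the paper's implementation: the dimensions, the single cross-attention pass, and the linear output head are all assumptions. The key point is that all N_p future steps are decoded in one parallel pass over the encoder memory, with no step-by-step loop feeding previous predictions back in.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
D, Np, H = 8, 12, 20                 # feature dim, future steps, history length (assumed)

# Learnable queries: one trainable vector per future timestep (here random stand-ins)
queries = rng.normal(size=(Np, D))
memory = rng.normal(size=(H, D))     # encoder output over the observed history

# One cross-attention pass decodes all Np future steps in parallel
scores = queries @ memory.T / np.sqrt(D)   # (Np, H)
decoded = softmax(scores) @ memory         # (Np, D), no autoregressive loop

# A linear head maps each decoded feature to a 2D position
W = rng.normal(size=(D, 2))
trajectory = decoded @ W                   # (Np, 2)
print(trajectory.shape)
```

Because no query depends on a previously predicted position, the same parallel pass runs at both training and inference time, which is the parallelization benefit the quoted passage describes.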
“…Modeling spatial interactions can guide the model to avoid pedestrian collisions for more realistic trajectory prediction. Current temporal interaction modeling approaches for capturing historical motion factors of pedestrians typically employ LSTM [14,15,16,17,18,19] or GRU [20,21], Transformers [22,23,24,25], and graph convolution neural networks [26,27,28]. Existing spatial interaction modeling methods capture social interactions through pooling mechanisms [14,15,29], attention mechanisms [18,23,24,30,31], and graph convolutions and their variants [21,26,27,28,32].…”
Section: Introduction
confidence: 99%
“…Previous trajectory prediction methods for modeling spatial and temporal interactions have typically summed [26,27,30] or sequenced [24,28] the features of both. However, none of these methods considers the relationship between spatial and temporal interactions.…”
Section: Introduction
confidence: 99%