2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw56347.2022.00282
Goal-driven Self-Attentive Recurrent Networks for Trajectory Prediction

Cited by 22 publications (16 citation statements)
References 31 publications
“…Anchor-conditioned models use a set of anchor points to condition the prediction of an agent's trajectory [21, 26-31]. These anchor points can be, for example, the respective agent's estimated final goal position, which is used to condition the final trajectory estimation or even a trajectory proposal.…”
Section: Anchor-conditioned Methods
confidence: 99%
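The anchor-conditioning idea quoted above can be summarized in a few lines: an estimated goal (anchor) point is embedded and fused with the encoded past motion before the future trajectory is decoded. The sketch below is purely illustrative, with hypothetical module names, dimensions, and a GRU-based encoder/decoder; it is not the architecture of any cited paper.

```python
# Minimal sketch of a goal-/anchor-conditioned trajectory decoder (illustrative only).
import torch
import torch.nn as nn

class GoalConditionedDecoder(nn.Module):
    """Decodes future positions conditioned on past motion and an anchor (goal) point."""

    def __init__(self, hidden_dim: int = 64, pred_len: int = 12):
        super().__init__()
        self.pred_len = pred_len
        # Encode the observed (x, y) sequence into a single hidden state.
        self.past_encoder = nn.GRU(input_size=2, hidden_size=hidden_dim, batch_first=True)
        # Embed the goal anchor (x, y) so it can be fused with the motion encoding.
        self.goal_embed = nn.Linear(2, hidden_dim)
        self.decoder = nn.GRU(input_size=hidden_dim, hidden_size=hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 2)  # per-step (x, y) prediction

    def forward(self, past_xy: torch.Tensor, goal_xy: torch.Tensor) -> torch.Tensor:
        # past_xy: (B, T_obs, 2) observed positions; goal_xy: (B, 2) anchor point.
        _, h = self.past_encoder(past_xy)                         # h: (1, B, H)
        cond = h + self.goal_embed(goal_xy).unsqueeze(0)          # inject the goal anchor
        steps = cond.transpose(0, 1).repeat(1, self.pred_len, 1)  # (B, T_pred, H)
        dec_out, _ = self.decoder(steps, cond)
        return self.out(dec_out)                                  # (B, T_pred, 2)

# One forward pass per candidate goal yields one predicted trajectory mode.
model = GoalConditionedDecoder()
future = model(torch.randn(4, 8, 2), torch.randn(4, 2))           # (4, 12, 2)
```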
“…While Dendorfer et al. [30] modeled the multimodality of the task only through goal distribution estimation, which they argued eased the task of the trajectory decoder, Mangalam et al. [28] sampled goals and waypoints from the predicted distributions and generated a conditioned probability distribution for every timestep of the trajectory. Chiara et al. [26] followed this approach by sampling from goal distributions and injecting random noise into the trajectory generation module. While Mangalam et al. [28] performed their computations entirely in image space, Chiara et al. [26] only predicted non-parametric probability distributions for possible goals in image space.…”
Section: Anchor-conditioned Methods
confidence: 99%
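As a rough illustration of the goal-sampling step described in this excerpt, the sketch below draws K candidate goals from a predicted non-parametric probability map over image-space cells and perturbs them with random noise before they condition the trajectory decoder. The grid size, K, and noise scale are assumptions for illustration, not values from the cited works.

```python
# Illustrative goal sampling from a predicted probability map (assumed sizes).
import torch

def sample_goals(goal_logits: torch.Tensor, k: int = 20) -> torch.Tensor:
    """goal_logits: (B, H, W) unnormalized scores over image-space goal cells.
    Returns k sampled goal coordinates per batch element, shape (B, k, 2)."""
    b, h, w = goal_logits.shape
    probs = torch.softmax(goal_logits.reshape(b, -1), dim=-1)        # (B, H*W)
    idx = torch.multinomial(probs, num_samples=k, replacement=True)  # (B, k)
    ys = torch.div(idx, w, rounding_mode="floor")
    xs = idx % w
    return torch.stack([xs, ys], dim=-1).float()                     # pixel coordinates

# Each sampled goal conditions one decoded trajectory mode (e.g. via the decoder
# sketched earlier); random noise can be added to diversify the generated futures.
goals = sample_goals(torch.randn(4, 64, 64), k=20)      # (4, 20, 2)
noisy_goals = goals + 0.5 * torch.randn_like(goals)     # assumed noise scale
```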
“…Recently, the Transformer architecture [63] has been applied to this task [21,62,76,77] to model spatio-temporal relations via an attention mechanism. Moreover, various viewpoints have emerged towards more practical applications, e.g., goal-driven prediction [13,40,60,81], long-tail situations [39], interpretability [32], robustness [9,66,70,80], counterfactual analysis [11], planning-driven prediction [12], generalization to new environments [6,27,72], and knowledge distillation [44].…”
Section: Trajectory Prediction
confidence: 99%
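For the attention-based temporal modeling mentioned in this excerpt, a minimal sketch is shown below: observed (x, y) positions are embedded per timestep and passed through a standard Transformer encoder. Layer sizes and the overall wiring are assumptions for illustration, not the design of any specific cited method.

```python
# Minimal sketch of self-attention over observed trajectory timesteps (assumed sizes).
import torch
import torch.nn as nn

class TrajectoryAttentionEncoder(nn.Module):
    """Self-attention over the observed timesteps of a single agent's trajectory."""

    def __init__(self, d_model: int = 64, nhead: int = 4, num_layers: int = 2):
        super().__init__()
        self.input_proj = nn.Linear(2, d_model)  # embed each (x, y) observation
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, past_xy: torch.Tensor) -> torch.Tensor:
        # past_xy: (B, T_obs, 2) -> (B, T_obs, d_model) attended temporal features
        return self.encoder(self.input_proj(past_xy))

feats = TrajectoryAttentionEncoder()(torch.randn(4, 8, 2))  # (4, 8, 64)
```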