2018
DOI: 10.1007/978-3-030-01249-6_19
Action Anticipation with RBF Kernelized Feature Mapping RNN

Abstract: We introduce a novel Recurrent Neural Network-based algorithm for future video feature generation and action anticipation called feature mapping RNN. Our novel RNN architecture builds upon three effective principles of machine learning, namely parameter sharing, Radial Basis Function kernels and adversarial training. Using only some of the earliest frames of a video, the feature mapping RNN is able to generate future features with a fraction of the parameters needed in a traditional RNN. By feeding these future…
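The abstract only outlines the architecture, so the following is a minimal sketch of the core idea as described there: a small recurrent cell with shared weights whose hidden state is passed through Radial Basis Function (RBF) units in order to map the current CNN feature of a frame to a predicted future feature. All module names, dimensions and the exact recurrence below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class RBFFeatureMappingCell(nn.Module):
    """Illustrative cell: maps the current CNN feature x_t to a predicted
    future feature x_{t+1} via a small shared hidden state and RBF units."""
    def __init__(self, feat_dim=512, hidden_dim=128, num_centres=32):
        super().__init__()
        self.in_proj = nn.Linear(feat_dim, hidden_dim)     # shared input map
        self.rec_proj = nn.Linear(hidden_dim, hidden_dim)  # shared recurrence
        # RBF layer: learnable centres and per-centre bandwidths (gamma).
        self.centres = nn.Parameter(torch.randn(num_centres, hidden_dim))
        self.log_gamma = nn.Parameter(torch.zeros(num_centres))
        self.out_proj = nn.Linear(num_centres, feat_dim)

    def forward(self, x_t, h_prev):
        # Update the hidden state from the current feature and previous state.
        h = torch.tanh(self.in_proj(x_t) + self.rec_proj(h_prev))
        # RBF activations: exp(-gamma * ||h - centre||^2) for every centre.
        dist2 = torch.cdist(h, self.centres).pow(2)         # (batch, num_centres)
        phi = torch.exp(-self.log_gamma.exp() * dist2)
        # Predict the next time step's feature vector.
        return self.out_proj(phi), h

Unrolled autoregressively, such a cell would feed each predicted feature back in as the next input, extending observed early-frame features into anticipated future features before they are classified.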

Cited by 73 publications (82 citation statements)
References 55 publications
“…In Tab. 2, the results for UCF101-24 show that our model is able to outperform RBF-RNN [45] by 0.9%, while in Tab. 3 we outperform [45] on the UTI dataset by 1.3% at the earliest setting.…”
Section: Comparison To the State-of-the-art Methods (mentioning)
confidence: 82%
“…The authors of RBF-RNN [45] use a GAN learning process where the loss function is also automatically learnt. Similar to the proposed architecture, the RBF-RNN [45] model also utilises the spatial representation of the scene through a Deep CNN model and tries to predict the future scene representations. However, in contrast to the proposed architecture, this method does not utilise temporal features or joint learning.…”
Section: Comparison To the State-of-the-art Methods (mentioning)
confidence: 99%
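The GAN learning process mentioned in the statement above is sketched here only at a generic level: a discriminator is trained to separate real future CNN features from the generator's predicted ones, so its score serves as an automatically learnt loss for the feature generator. The layer sizes, optimiser handling and function names below are illustrative assumptions, not the exact procedure from [45].

import torch
import torch.nn as nn

# Hypothetical discriminator over 512-d features; the actual model may differ.
disc = nn.Sequential(nn.Linear(512, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()

def adversarial_step(generated_feat, real_future_feat, d_opt, g_opt):
    # Discriminator update: push real future features towards 1, generated towards 0.
    d_opt.zero_grad()
    d_real = disc(real_future_feat)
    d_fake = disc(generated_feat.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    d_opt.step()
    # Generator update: the discriminator acts as an automatically learnt loss.
    g_opt.zero_grad()
    g_score = disc(generated_feat)
    g_loss = bce(g_score, torch.ones_like(g_score))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()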