2022
DOI: 10.1109/tpami.2020.3029425
A Dynamic Frame Selection Framework for Fast Video Recognition

Cited by 34 publications (32 citation statements)
References 40 publications
Citation types: 1 supporting, 31 mentioning, 0 contrasting
“…Given the limited space, we introduce them in Appendix A. Following the common practice [19,40,47,48,64,68,69], we evaluate the performance of different methods via mean average precision (mAP) and Top-1 accuracy (Top-1 Acc.) on ActivityNet/FCVID and other datasets, respectively.…”
Section: Methods (mentioning)
confidence: 99%
“…Temporal redundancy. A popular approach for facilitating efficient video recognition is to reduce the temporal redundancy in videos [19,21,35,36,47,57,67,68,72]. Since not all frames are equally important for a given task, the model should ideally allocate less computation to less informative frames [24].…”
Section: Related Work (mentioning)
confidence: 99%
“…A multi-agent formulation has also been proposed in [5], casting frame sampling as multiple parallel Markov decision processes. More recently, Wu et al. [42] trained a policy network containing a long short-term memory (LSTM) to provide context information; it interacts with the video sequence and dynamically decides which frames to use. Besides, another group of methods focuses on reducing spatial redundancy.…”
Section: B. Reducing Spatial and Temporal Redundancy (mentioning)
confidence: 99%