TENCON 2021 - 2021 IEEE Region 10 Conference (TENCON)
DOI: 10.1109/tencon54134.2021.9707226
Unsupervised Action Localization Crop in Video Retargeting for 3D ConvNets

Cited by 2 publications (2 citation statements). References 21 publications.

“…As such, we have reached a stage where researchers are moving on to improve performance on even complex computer vision problems, viz. 3D object detection [32], action detection and localization [3], tracking objects across videos, event recognition and scene understanding [33], [4], etc. In this chapter, we focus on the task of event/activity recognition in videos, done with the assistance of frame-wise object detection, which enables inter-frame tracking of objects.…”
Section: Video Activity Recognition Assisted by Object Detection (citation type: mentioning)
confidence: 99%

“…The task of video event recognition [2] is to predict the ongoing event or activity in a video throughout its duration. The prevalent approach is to obtain a global video-level feature representation [3] from 3D CNNs and classify the video using the same. However, this is generally insufficient for fine-grained tasks.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
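
For context, the "global video-level feature" pipeline mentioned in the statement above can be sketched as follows: a 3D CNN backbone pools each clip into a single video-level vector, and a linear head classifies the event. This is a minimal illustrative sketch in PyTorch, assuming torchvision's r3d_18 as the 3D ConvNet; the clip size, feature width, and class count are assumptions for illustration, not the cited papers' exact configuration.

```python
# Minimal sketch: global clip-level feature from a 3D CNN, then a linear classifier.
# Backbone choice (r3d_18) and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

class GlobalClipClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        backbone = r3d_18()            # 3D ConvNet backbone (Kinetics-style ResNet-18)
        backbone.fc = nn.Identity()    # keep the pooled 512-d video-level feature
        self.backbone = backbone
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, clip):           # clip: (N, 3, T, H, W)
        feat = self.backbone(clip)     # global video-level representation, shape (N, 512)
        return self.classifier(feat)   # event/activity logits

model = GlobalClipClassifier(num_classes=10)
logits = model(torch.randn(1, 3, 16, 112, 112))  # one 16-frame 112x112 clip
print(logits.shape)                              # torch.Size([1, 10])
```

Because the whole clip collapses to one pooled vector, spatial and temporal detail is lost, which is the limitation the citing paper points to for fine-grained tasks.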