2013 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2013.343
Recognize Human Activities from Partially Observed Videos

Abstract: Recognizing human activities in partially observed videos is a challenging problem with many practical applications. When the unobserved subsequence is at the end of the video, the problem reduces to activity prediction from an unfinished activity stream, which has been studied by many researchers. In the general case, however, an unobserved subsequence may occur at any time, yielding a temporal gap in the video. In this paper, we propose a new method that can recognize human activities from partially …

Cited by 175 publications
(161 citation statements)
References 22 publications
“…These include running the actual released code for C2 [24], Action Bank [57], Stacked ISA [36], and VHTK [44]. We also obtained the code for Cao's method [6], Cao's reimplementation [6] of Ryoo's method [56], and Retinotopic [3] from the authors. We also employ a number of other recent methods, including Dense Trajectories [72,73], Improved Trajectories [74], C3D [69], and the methods of Simonyan and Zisserman [60], Ng et al [46], and Xu et al [80].…”
Section: Baseline Experiments
confidence: 99%
“…The dataset used for the THUMOS Challenge differs from the LCA dataset in several ways. First, the THUMOS Challenge uses trimmed videos from UCF101 [65] as the training set; only the validation and test sets involve untrimmed videos. [Table 6: Comparison between machine-human and human-human intercoder agreement on the LCA dataset, using the evaluation metric from the THUMOS Challenge [25], comparing against a single human annotator: cbushman.] The LCA dataset is partitioned into five sets of videos for training and test; each set consists of untrimmed videos.…”
Section: Related Work
confidence: 99%
“…In [33], a probabilistic framework for activity recognition from partially observed videos is introduced. Sparse coding is used to estimate posterior probabilities over a bag-of-visual-words representation.…”
Section: Probabilistic Approaches
confidence: 99%
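The citation statement above describes the paper's core idea: sparse-code a bag-of-visual-words histogram of the observed video against a dictionary of labeled training histograms, then read class posteriors off the sparse coefficients. The sketch below is a minimal illustration of that general technique, not the paper's actual implementation; the ISTA solver, the dictionary layout (one labeled column per training histogram), and the energy-based posterior estimate are all assumptions made for the example.

```python
import numpy as np

def ista_lasso(D, x, lam=0.1, n_iter=200):
    """Sparse coding via ISTA: min_a 0.5*||x - D a||^2 + lam*||a||_1.
    (Illustrative solver choice; the paper does not specify ISTA.)"""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        a = a - grad / L                   # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

def class_posteriors(D, labels, x, lam=0.1):
    """Posterior estimate: the share of sparse-code energy that lands on
    each class's dictionary atoms (hypothetical scoring rule)."""
    a = ista_lasso(D, x, lam)
    energy = np.abs(a)
    classes = np.unique(labels)
    scores = np.array([energy[labels == c].sum() for c in classes])
    total = scores.sum()
    if total == 0:                         # degenerate code: fall back to uniform
        return classes, np.full(len(classes), 1.0 / len(classes))
    return classes, scores / total
```

Here `D` holds one L2-normalized bag-of-visual-words histogram per column, `labels` gives each column's activity class, and `x` is the histogram of the partially observed query video; the returned vector sums to one and can be thresholded or argmax-ed for recognition.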
“…Their experiments focus on trajectory prediction, but the proposal is presented for general situations. Cao et al. presented in [5] an approach that applies sparse coding to subsamples of the sequence to predict posterior probabilities of activities for partially observed sequences. Uddin et al. proposed in [24] a Human Activity Prediction (HPA) system that uses spanning trees to predict and recognize activities.…”
confidence: 99%