2012 IEEE International Conference on Robotics and Automation
DOI: 10.1109/icra.2012.6224777

Sparse representation of point trajectories for action classification

Cited by 8 publications (6 citation statements)
References 29 publications
“…A large part of the action recognition methodology in this work most closely resembles the work of Sivalingam et al. out of the same lab as this work [7]. When trying a nearest-neighbor (NN) algorithm and comparing results on segmented data from the Kinect and the accompanying Application Programming Interface (API), it was found that the trained classifiers produced satisfactory results without any additional, more complicated or computationally expensive algorithm.…”
Section: IEEE/RSJ International Conference on
confidence: 65%
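The nearest-neighbor baseline mentioned in this citing work can be sketched in a few lines. This is a minimal illustration only, assuming per-segment feature vectors extracted from Kinect skeleton data; the descriptor layout, function name, and training-set organization are assumptions, not details taken from either paper.

```python
# Minimal sketch (assumed details): classify a segmented Kinect action by the
# label of its nearest training example in descriptor space.
import numpy as np

def nearest_neighbor_label(query, train_descriptors, train_labels):
    """query: (d,) feature vector for one segmented action;
    train_descriptors: (N, d) array of training feature vectors;
    train_labels: length-N sequence of action labels."""
    distances = np.linalg.norm(train_descriptors - query, axis=1)
    return train_labels[int(np.argmin(distances))]
```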
“…In [24], advantages of both dense and sparse sampling are combined, and descriptors are extracted on a dense grid pruned either randomly or based on a sparse saliency mask of the underlying video. In [25], the authors compare two different representation schemes, raw multivariate time-series data and the covariance descriptors of the trajectories, and apply sparse representation techniques for classifying the various actions. The features are sparse coded using the orthogonal matching pursuit algorithm, and the gestures and actions are classified based on the reconstruction residuals.…”
Section: Previous Workmentioning
confidence: 99%
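The pipeline summarized in this statement, sparse coding with orthogonal matching pursuit (OMP) followed by classification on reconstruction residuals, can be sketched as below. This is a hedged illustration built on scikit-learn's OrthogonalMatchingPursuit; the per-class dictionary layout, sparsity level, and function names are assumptions, not the parameters used in the cited paper.

```python
# Hedged sketch: sparse-code a trajectory descriptor against per-class
# dictionaries with OMP and assign the class with the smallest reconstruction
# residual.  Dictionary construction and the sparsity level are assumptions.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def classify_by_residual(feature, class_dictionaries, n_nonzero_coefs=5):
    """feature: (d,) descriptor of one gesture/action.
    class_dictionaries: dict mapping label -> (d, n_atoms) array whose
    columns are training descriptors (atoms) for that class."""
    best_label, best_residual = None, np.inf
    for label, D in class_dictionaries.items():
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero_coefs,
                                        fit_intercept=False)
        omp.fit(D, feature)                      # sparse code w.r.t. this class's atoms
        residual = np.linalg.norm(feature - D @ omp.coef_)
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```

Per-class dictionaries keep the residual comparison simple; a single pooled dictionary with class-wise residuals, as in sparse-representation classification, is an equally common variant.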
“…Consequently, how to effectively represent the action trajectories is key. In their study, two different representation schemes are compared, one based on raw multivariate time-series data and the other based on the covariance descriptors of the trajectories.…”
Section: Other Approaches
confidence: 99%
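For concreteness, the covariance-descriptor representation named in this statement can be sketched as follows. The trajectory layout (time steps by feature dimensions) and the upper-triangle vectorization are assumptions chosen to keep the example self-contained; the cited paper may arrange or normalize the descriptor differently.

```python
# Hedged sketch: covariance descriptor of a multivariate point-trajectory
# time series, plus a simple vectorization so it can feed a vector-space
# sparse coder.  Layout and normalization are illustrative assumptions.
import numpy as np

def covariance_descriptor(trajectory):
    """trajectory: (T, d) array, T time steps of a d-dimensional feature
    (e.g. concatenated 2-D point coordinates).  Returns the d x d sample
    covariance, a fixed-size summary that is independent of T."""
    centered = trajectory - trajectory.mean(axis=0, keepdims=True)
    return centered.T @ centered / max(trajectory.shape[0] - 1, 1)

def vectorize_spd(C):
    """Flatten the symmetric covariance into its upper triangle."""
    return C[np.triu_indices(C.shape[0])]
```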