2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops
DOI: 10.1109/cvprw.2013.76
Joint Angles Similarities and HOG2 for Action Recognition

Cited by 263 publications (172 citation statements)
References 13 publications
“…• cross-subject evaluation: the sequences performed by 20 actors are used as training and the others as testing data. The subjects that have to be used as training are: Table 1.…”
Section: Results (mentioning, confidence: 99%)
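The cross-subject protocol quoted above can be sketched in a few lines. This is a minimal illustration, not the cited paper's code; the `subject` field and the choice of the first 20 actor IDs as the training set are assumptions for the example.

```python
# Hedged sketch of a cross-subject evaluation split: sequences performed
# by the designated training subjects go to the training set, and all
# sequences by the remaining subjects form the test set.
def cross_subject_split(sequences, train_subjects):
    """Partition `sequences` (dicts with a 'subject' key) by performer."""
    train = [s for s in sequences if s["subject"] in train_subjects]
    test = [s for s in sequences if s["subject"] not in train_subjects]
    return train, test

# Illustrative data: 120 sequences spread over 40 actors; the first 20
# actor IDs are used for training, as in the excerpt above.
data = [{"subject": i % 40, "seq": i} for i in range(120)]
train, test = cross_subject_split(data, set(range(20)))
```

Splitting by performer rather than by random sampling ensures no actor appears in both sets, so the classifier cannot exploit subject-specific motion style.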
“…This approach extracts low-level features from skeletal data, such as pairwise joint displacements, and uses Principal Component Analysis (PCA) to perform dimension reduction. Many other existing human representations are also based on low-level skeleton-based features [211,128,148,143,132,126,149,150,164,167,168,127,172] without modeling the hierarchy of the data. Several multi-layer techniques were also implemented to create skeleton-based human representations from low-level features.…”
Section: Representations Based On Low-level Features (mentioning, confidence: 99%)
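The low-level pipeline described in that excerpt — pairwise joint displacements followed by PCA — can be sketched as below. This is an illustrative reconstruction under assumed shapes (20 joints, 3-D coordinates, 50 frames), not the implementation from any of the cited works.

```python
import numpy as np

def pairwise_displacements(skeleton):
    """skeleton: (J, 3) array of joint positions for one frame.
    Returns the flattened vector of all pairwise joint differences."""
    J = skeleton.shape[0]
    i, j = np.triu_indices(J, k=1)          # all unordered joint pairs
    return (skeleton[i] - skeleton[j]).reshape(-1)

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data matrix yields the principal axes in Vt.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Illustrative skeletal data: 50 frames of a 20-joint skeleton.
rng = np.random.default_rng(0)
frames = rng.standard_normal((50, 20, 3))
feats = np.stack([pairwise_displacements(f) for f in frames])
reduced = pca_reduce(feats, 10)             # (50, 10) per-frame features
```

With 20 joints, each frame yields 190 joint pairs × 3 coordinates = 570 raw dimensions, which PCA compresses to a compact per-frame descriptor.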
“…Our approach differs from local feature-based approaches [25,44] and skeleton-based approaches [43,45]. We learn hierarchical kernel descriptors, which are essentially nonlinear feature mappings.…”
Section: Related Work (mentioning, confidence: 99%)
“…Our approach is capable of fusing RGB and depth data for classification while methods in [25,44] can only be applied to depth data. Compared with skeleton data-based approaches [43,45], we propose an elegant framework that is capable of fusing RGB and depth data while it is not clear how the evolutionary algorithm [43] can fuse various modality data. We achieve comparable results with [44,45] and outperform [43] on MSR-Action3D dataset as RGB videos are not provided in this dataset.…”
Section: Related Work (mentioning, confidence: 99%)