2017
DOI: 10.1016/j.patrec.2017.02.001
Learning features combination for human action recognition from skeleton sequences

Abstract: Human action recognition is a challenging task due to the complexity of human movements and to the variety among the same actions performed by distinct subjects. Recent technologies provide the skeletal representation of the human body extracted in real time from depth maps, which is highly discriminant information for efficient action recognition. In this context, we present a new framework for human action recognition from skeleton sequences. We propose extracting sets of spatial and temporal local features from…

Cited by 83 publications (20 citation statements)
References 29 publications
“…The recognition rate of our approach on the UCKinect Dataset is 98.87%. Our method outperforms the Lie group [5], Grassmann manifold [16], Eigenjoints [29], and learning feature combination [30], which achieved recognition rates of 97.08%, 97.91%, 97.1%, and 98.00%, respectively, as shown in Table 4.…”
Section: Experiments On the UCKinect-Action Dataset
confidence: 87%
“…The recognition rate of our approach on the dataset is 98.23%. It is obvious that our approach outperforms SE3 [12], EigenJoints [13], Grassmann manifold [8], Key-Pose-Motifs [18], learning features combination [16], Ensemble TS-LSTM [14], tLDS [6], and Bi-LSTM [15], which achieve recognition rates of 97.08%, 97.10%, 88.5%, 93.47%, 98.00%, 96.97%, 96.48%, and 96.89%, respectively.…”
Section: Approach
confidence: 93%
“…Recognition accuracy: SE3 [12] 89.48; Grassmann Manifold [8] 91.21; Learning features combination [16] 90.36 ± 2.45; ST-LSTM + Trust Gate [17] 94.80; Bi-LSTM [15] 86.18; our approach 96.97. Our approach outperforms various other methods extracting the action feature from 3D joint positions. Our approach achieves an average accuracy of 97.63% for the MSR-Action3D dataset, outperforming the other action recognition approaches.…”
Section: Approach
confidence: 98%
“…Accuracy: Multi-part Bag-of-Poses [22] 82.00%; Riemannian Manifold [23] 87.04%; Latent Variables [24] 89.67%; Lie Group [9] 90.88%; Feature Combinations [25] 94…”
Section: Methods
confidence: 99%