2016 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2016.7532918
Human action recognition based on 3D skeleton part-based pose estimation and temporal multi-resolution analysis

Abstract: Human action recognition is a challenging field that has been addressed with many different classification techniques, such as SVM or Random Decision Forests, and by considering many different kinds of information, for example joints, key poses, joint rotation matrices, and angles. This paper presents our approach for action recognition, which considers only the information given by the 3D joints of the skeleton and trains a two-stage random forest to classify them. We extract skeletal features by c…

Cited by 9 publications (9 citation statements)
References 11 publications

“…All those studies show that using the skeleton instead of purely RGB data is a good solution. Combining the skeleton with another classifier such as a Random Forest also shows interesting results [11] on the MSR-DailyActivity3D dataset. In that study, the vector representing a moment in the data flow is composed of all the coordinates of the skeleton joints and all the distances and angles between the joints.…”
Section: Skeleton-based Work (mentioning)
confidence: 96%
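
The per-frame feature described in [11], as summarized above, can be sketched roughly as follows: raw 3D joint coordinates plus all pairwise distances and all triplet angles. This is a minimal illustration under our own assumptions (NumPy, a (num_joints, 3) array per frame); the function and variable names are ours and not taken from the cited papers.

from itertools import combinations

import numpy as np


def angle_at(vertex, p1, p2):
    # Interior angle at `vertex` of the triangle (p1, vertex, p2).
    a, b = p1 - vertex, p2 - vertex
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))


def frame_features(joints):
    # joints: (num_joints, 3) array of 3D joint positions for one skeleton frame.
    coords = joints.ravel()  # raw x, y, z coordinates of every joint

    # Euclidean distance between every pair of joints.
    dists = [np.linalg.norm(joints[i] - joints[j])
             for i, j in combinations(range(len(joints)), 2)]

    # The three interior angles of the triangle formed by every joint triplet.
    angles = [angle_at(joints[v], joints[a], joints[b])
              for i, j, k in combinations(range(len(joints)), 3)
              for v, a, b in ((i, j, k), (j, i, k), (k, i, j))]

    return np.concatenate([coords, dists, angles])


# Example: a 20-joint Kinect-style skeleton yields 60 + 190 + 3420 values.
features = frame_features(np.random.rand(20, 3))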
“…This vector doesn't contain enough data to fully classify different actions. Similarly to the work done in [11], we have augmented our feature vector by computing and adding all possible distances/angles between all possible pairs/triplets of joints. This process results in a feature vector of 3610 values: 3420 angle values followed by 190 distance values.…”
Section: Feature Vector (mentioning)
confidence: 99%
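
The 3610-value count quoted above is consistent with the 20-joint Kinect v1 skeleton if three interior angles are taken per joint triplet. A quick arithmetic check (our own illustration, not code from the cited work):

from math import comb

num_joints = 20                      # Kinect v1 skeleton (assumed)
distances = comb(num_joints, 2)      # 190 joint pairs
angles = 3 * comb(num_joints, 3)     # 3 angles per triplet -> 3 * 1140 = 3420
print(distances, angles, distances + angles)  # 190 3420 3610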
“…Instead of using relative joint positions, Gori et al. [16] used all possible joint pairs in the Euclidean distance sense, and tracked these distances over time to form relation history images of joints. In another approach, Aly Halim et al. [27] calculated all possible distances and angles from joint triplets, i.e. sets of three joint locations, which amounts to 190 distances and 3420 angles.…”
Section: Updated Review (mentioning)
confidence: 99%
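
As a rough illustration of the idea attributed to Gori et al. [16] above (pairwise joint distances tracked over time), the sketch below stacks per-frame distance vectors into a pairs-by-frames matrix. It is not their exact relation history image construction; the function name and array layout are our own assumptions.

from itertools import combinations

import numpy as np


def pairwise_distance_map(sequence):
    # sequence: (num_frames, num_joints, 3) array of 3D joint positions.
    num_joints = sequence.shape[1]
    pairs = list(combinations(range(num_joints), 2))
    # One row per joint pair, one column per frame.
    return np.stack([np.linalg.norm(sequence[:, i] - sequence[:, j], axis=1)
                     for i, j in pairs])


# Example: 30 frames of a 20-joint skeleton give a 190 x 30 distance map.
dist_map = pairwise_distance_map(np.random.rand(30, 20, 3))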