2016 13th International Conference on Computer Graphics, Imaging and Visualization (CGiV)
DOI: 10.1109/cgiv.2016.66
Machine Learning for Real Time Poses Classification Using Kinect Skeleton Data

Cited by 18 publications (7 citation statements) | References 1 publication
“…There are many researchers who apply machine learning algorithms on Kinect data for pose and gesture classification, e.g. [48,49]. Since Kinect V1 cannot detect finer movements and motion capture is still not 100% accurate, applying learning algorithms can ameliorate the results.…”
Section: Methods
confidence: 99%
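
To make the approach in the quoted passage concrete, here is a minimal sketch of feeding Kinect skeleton vectors to an off-the-shelf classifier with scikit-learn. The joint count (20, as in Kinect V1), the 18-pose vocabulary size, and the synthetic data are stand-ins for illustration; the cited papers do not prescribe this exact pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

N_JOINTS = 20   # Kinect V1 tracks 20 skeleton joints
N_POSES = 18    # pose vocabulary size reported in the cited paper

# Stand-in for captured frames: each sample is the (x, y, z) position of
# every joint, flattened into a single 60-dimensional vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, N_JOINTS * 3))
y = rng.integers(0, N_POSES, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any off-the-shelf classifier fits here; an RBF-kernel SVM is one common choice.
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```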
“…This solution gives complete body information, including articulations of the upper and lower limbs, without needing a depth sensor camera or fiducial markers on the body. Choubik and Mahmoudi (2016) have successfully classified human poses using a feature vector calculated from the Kinect skeleton structure. The vocabulary of the classifier had 18 poses associated with both arms.…”
Section: Motion Capture Approaches
confidence: 99%
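
The passage above mentions a feature vector calculated from the Kinect skeleton structure. A plausible first step, assumed here rather than taken from the paper, is to make the pose invariant to where the user stands by expressing every joint relative to a root joint. The sketch below assumes Kinect V1's 20-joint layout with the hip center at index 0; the helper name is hypothetical.

```python
import numpy as np

def pose_feature_vector(joints, root_index=0):
    """Flatten a (20, 3) array of Kinect joint positions into a
    position-invariant feature vector by subtracting the root joint
    (index 0 is the hip center in the Kinect V1 joint layout)."""
    joints = np.asarray(joints, dtype=float)
    centered = joints - joints[root_index]  # remove global translation
    return centered.ravel()                 # shape (60,)

# Example: a dummy skeleton frame of 20 joints.
frame = np.random.default_rng(1).normal(size=(20, 3))
print(pose_feature_vector(frame).shape)     # (60,)
```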
“…The rotation format keeps relative values between the points, similar to the difference between points seen in related work (Choubik and Mahmoudi, 2016; Ijjina and Mohan, 2014). Choubik and Mahmoudi (2016) use the difference between the joints to classify the person's pose.…”
Section: Skeleton Data and Preprocessing
confidence: 99%
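
The joint-difference features this passage attributes to Choubik and Mahmoudi (2016) can be sketched as pairwise differences between joint positions. Taking all pairs is an assumption for illustration; the quoted text does not say which pairs the paper uses.

```python
from itertools import combinations

import numpy as np

def joint_difference_features(joints):
    """Concatenate the difference vectors between all pairs of joints.
    For 20 joints this yields C(20, 2) = 190 pairs, i.e. 570 numbers."""
    joints = np.asarray(joints, dtype=float)
    pairs = combinations(range(len(joints)), 2)
    return np.concatenate([joints[i] - joints[j] for i, j in pairs])

frame = np.random.default_rng(2).normal(size=(20, 3))
print(joint_difference_features(frame).shape)   # (570,)
```

Because each feature is a difference of two positions, the vector is unchanged when the whole skeleton is translated, which is the relative-value property the passage highlights.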
“…In vision-based action recognition, the common approach is to extract image features from video data and to issue a corresponding action class label (Poppe, 2010; Babiker et al., 2018). Nevertheless, when a skeleton representation of the human body is used, the most privileged discriminative features are the raw data coming from the skeletal tracking (joint spatial coordinates) (Patsadu et al., 2012; Youness and Abdelhak, 2016) or some indices expressing geometric relations between certain body points, such as: the vertical distance from the hip joint to the room floor (Visutarrom et al., 2014, 2015), the distance between the right toe and the plane spanned by the left ankle, the left hip and the foot for a fixed pose (Müller et al., 2005), the distance between two joints, two body segments, or a joint and a body segment (Yang and Tian, 2014), the relative angle between two segments within the body kinematic chain (Müller et al., 2005) and, finally, the size of the 3D bounding box enclosing the body skeleton (Bevilacqua et al., 2014). Geometric features are synthetic in the sense that each expresses a single geometric aspect, making them particularly robust to spatial variations that are not correlated with the aspect of interest (Müller et al., 2005).…”
Section: Introduction
confidence: 99%
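
The geometric features enumerated in this passage are all short computations on joint coordinates. Below is a hedged sketch of each one; the function names, a y-up coordinate frame, and a floor plane at y = 0 are assumptions for illustration, not details taken from the cited papers.

```python
import numpy as np

def hip_to_floor_distance(hip, floor_y=0.0):
    """Vertical distance from the hip joint to the room floor
    (Visutarrom et al., 2014, 2015), assuming the y axis points up."""
    return hip[1] - floor_y

def joint_pair_distance(a, b):
    """Euclidean distance between two joints (Yang and Tian, 2014)."""
    return np.linalg.norm(a - b)

def segment_angle(a, b, c):
    """Relative angle (radians) at joint b between segments b->a and b->c
    within the body kinematic chain (Müller et al., 2005)."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards rounding error

def bounding_box_size(joints):
    """Size of the axis-aligned 3D box enclosing the body skeleton
    (Bevilacqua et al., 2014)."""
    return joints.max(axis=0) - joints.min(axis=0)

# Example on a dummy 20-joint skeleton frame.
joints = np.random.default_rng(3).normal(size=(20, 3))
print(hip_to_floor_distance(joints[0]))
print(joint_pair_distance(joints[4], joints[8]))
print(segment_angle(joints[4], joints[5], joints[6]))
print(bounding_box_size(joints))
```

Each helper isolates a single geometric aspect, which is exactly the property the passage credits for the robustness of geometric features.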