Proceedings of the 2009 Conference on Future Play @ GDC Canada
DOI: 10.1145/1639601.1639619
Device agnostic 3D gesture recognition using hidden Markov models

Cited by 7 publications (7 citation statements); references 2 publications.
“…This hierarchical approach, which breaks up the recognition process into actions and activities, helps to overcome the memory storage and computational power concerns of mobile devices. Other work on 3D gesture recognizers that incorporate HMMs includes [Bilal et al. 2011; Chen et al. 2003; Kelly et al. 2011; Just and Marcel 2009; Nguyen-Duc-Thanh et al. 2012; Pylvänäinen 2005; Whitehead and Fox 2009; Zappi et al. 2009]. …”
Section: Hidden Markov Models
Citation type: mentioning
Confidence: 99%
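The HMM-based recognizers cited in this statement mostly share one recipe: train a separate HMM per gesture class on example motion sequences, then label a new sequence with the class whose model assigns it the highest likelihood. The sketch below illustrates that recipe with hmmlearn; the (T, 3) sensor-frame layout, state count, and function names are assumptions for illustration, not the implementation of Whitehead and Fox or any other cited work.

```python
# Illustrative per-class HMM gesture recognizer (a sketch, not a cited author's code).
# Assumes each gesture example is a (T, 3) NumPy array of 3D sensor readings.
import numpy as np
from hmmlearn import hmm

def train_gesture_models(examples_by_label, n_states=5):
    """Fit one GaussianHMM per gesture label on that label's training examples."""
    models = {}
    for label, examples in examples_by_label.items():
        X = np.vstack(examples)               # stack all frames of this class
        lengths = [len(e) for e in examples]  # sequence boundaries for hmmlearn
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[label] = model
    return models

def classify(models, sequence):
    """Label a new (T, 3) sequence by the HMM with the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))
```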
“…As can be seen in the table, a variety of different methods have been proposed, and most of the results reported are able to achieve over …

Author | Recognition Approach | No. Gestures | Accuracy
Pang and Ding [Pang and Ding 2013] | HMMs with kinematic features | 12 | 91.2%
Wan et al. [Wan et al. 2012] | HMMs with sparse coding | 4 | 94.2%
Lee and Cho [Lee and Cho 2011] | Hierarchical HMMs | 3 | approx. 80.0%
Whitehead and Fox [Whitehead and Fox 2009] | Standard HMMs | 7 | 91.4%
Nguyen et al. [Nguyen-Duc-Thanh et al. 2012] | Two-stage HMMs | 10 | 95.3%
Chen et al. [Chen et al. 2003] | HMMs with Fourier descriptors | 20 | 93.5%
Pylvänäinen [Pylvänäinen 2005] | HMMs without rotation data | 10 | 99.76%
Chung and Yang [Chung and Yang 2013] | Threshold CRF | 12 | 91.9%
Yang et al. | Two-layer CRF | 48 | 93.5%
Yang and Lee [Yang and Lee 2010] | HCRF with BoostMap embedding | 24 | 87.3%
Song et al. [Song et al. 2011] | HCRF with temporal smoothing | 10 | 93.7%
Liu and Jia [Liu and Jia 2008] | HCRF with manifold learning | 10 | 97.8%
Elmezain et al. [Elmezain and Al-Hamadi 2012] | LDCRF with depth camera | 36 | 96.1%
Song et al. [Song et al. 2012] | LDCRF with filtering framework | 24 | 75.4%
Zhang et al. | Fuzzy LDCRF | 5 | 91.8%
Huang et al. [Huang et al. 2009] | SVM with Gabor filters | 11 | 95.2%
Hsieh et al. | SVM with Fourier descriptors | 5 | 93.4%
Hsieh et al. [Hsieh and Liou 2012] | SVM with Haar features | 4 | 95.6%
Dardas et al. [Dardas and Georganas 2011] | SVM with bag-of-words | 10 | 96.2%
Chen and Tseng [Chen and Tseng 2007] | Fusing multiple SVMs | 3 | 93.3%
Rashid et al. [Rashid et al. 2009] | Combining SVM with HMM | 18 | 98.0%
Yun and Peng [Yun and Peng 2009] | Hu moments with SVM | 3 | 96.2%
Ren and Zhang [Ren and Zhang 2009] | SVM with minimum enclosing ball | 10 | 92.9%
Wu et al. [Wu et al. 2009] | Frame-based descriptor with SVM | 12 | 95.2%
He et al. [He et al. 2008] | SVM with wavelet and FFT | 17 | 87.4%
Nisar et al. [Nisar et al. 2009] | Decision trees | 26 | 95.0%
Jeon et al. [Jeon et al. 2009] | Multivariate fuzzy decision trees | 10 | 90.6%
Zhang et al. | Decision trees fused with HMMs | 72 | 96.3%
Fang et al. [Fang et al. 2003] | Hierarchical decision trees | 14 | 91.6%
Miranda et al. [Miranda et al. 2012] | Decision forest with key pose learning | 10 | 91.5%
Keskin et al. …”
Section: Experimentation and Accuracy
Citation type: mentioning
Confidence: 99%
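To make the SVM rows of the table above concrete, here is a minimal sketch of the pipeline those entries broadly share: collapse each gesture into a fixed-length, frame-based descriptor and feed it to a multi-class SVM. The segment-mean descriptor, kernel settings, and scikit-learn usage are illustrative assumptions, not any cited author's method.

```python
# Illustrative SVM-based gesture recognizer (a sketch, not a cited method).
# Each gesture is a (T, 3) array; it is reduced to a fixed-length descriptor
# so a standard multi-class SVM can be applied. Assumes T >= n_segments.
import numpy as np
from sklearn.svm import SVC

def descriptor(sequence, n_segments=8):
    """Fixed-length, frame-based descriptor: mean 3D frame of each time segment."""
    segments = np.array_split(np.asarray(sequence), n_segments)
    return np.concatenate([seg.mean(axis=0) for seg in segments])

def train_svm(examples, labels):
    """Train a multi-class RBF SVM on the descriptors of labeled gesture examples."""
    X = np.array([descriptor(e) for e in examples])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(X, labels)
    return clf

def predict(clf, sequence):
    """Predict the gesture label of a new sequence."""
    return clf.predict(descriptor(sequence).reshape(1, -1))[0]
```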
“…Activity recognition is a field of research that investigates how to accurately detect the different activities a person performs. Examples include recognizing activities such as walking, sitting, standing, cooking, and eating (van Kasteren and Kröse, 2007; Tran and Sorokin, 2008), recognizing gestures from video or motion sensor data (Whitehead and Fox, 2009; Yin and Xie, 2007), and recognizing interactions between one or more persons or objects (Patterson et al., 2005; Wu et al., 2007). The latter is also known as interaction detection. …”
Section: Introduction
Citation type: mentioning
Confidence: 99%