2015
DOI: 10.1016/j.neucom.2013.10.046
Skeleton-based action recognition with extreme learning machines

Cited by 43 publications (31 citation statements)
References 26 publications
“…A depth motion maps (DMM)-based human action recognition method using an l2-regularized collaborative representation classifier is introduced. Third, in the method of [28], skeleton joint position information with temporal differences is produced as the final feature, and an extreme learning machine is used for action recognition. The comparison results are listed in Table 2.…”
Section: B. Comparison With Other Methods (mentioning)
confidence: 99%
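The excerpt above summarizes the cited paper's pipeline only at a high level: temporal differences of skeleton joint positions as features, classified with an extreme learning machine. Below is a minimal sketch of that kind of pipeline, assuming NumPy arrays of shape (frames, joints, 3); the hidden-layer size, ridge term, and feature layout are illustrative assumptions, not values from [28].

```python
# A minimal sketch, assuming (frames, joints, 3) arrays of equal length per sample;
# the hidden-layer size, ridge term, and feature layout are illustrative, not the
# settings of the cited paper [28].
import numpy as np

def temporal_difference_features(joints):
    """joints: (T, J, 3) array of 3D joint positions over T frames.
    Returns a flat descriptor of frame-to-frame position differences."""
    diffs = np.diff(joints, axis=0)   # (T-1, J, 3) temporal differences
    return diffs.reshape(-1)          # flatten into one feature vector

class SimpleELM:
    """Basic extreme learning machine: random hidden layer, least-squares readout."""
    def __init__(self, n_hidden=256, reg=1e-3, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        targets = np.eye(n_classes)[y]                     # one-hot class targets
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                   # random hidden activations
        # Only the output weights are trained, via ridge-regularized least squares.
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden),
                                    H.T @ targets)
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)
```

The defining property of an ELM is that the input-to-hidden weights stay random and only the output weights (`beta` here) are fitted, which keeps training to a single linear solve.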
“…By comparison, it can be seen that our scheme outperforms the approaches published in [13] in all three test cases. For the challenging cross-subject test, the algorithm in [28] produces better results on AS2 and AS3. The most probable reason is that the actions in these two subsets are more complicated, and the proposed accurate joint position information can effectively address the problems of high intra-class variability and inter-class similarity.…”
Section: B. Comparison With Other Methods (mentioning)
confidence: 99%
“…In cases where the hardware directly returns the spatial coordinates of the body joints, authors prefer to use features derived from those coordinates. Among these are various configurations of angle-based [6,10,46] and coordinate-based features [2,12]. Based on our previous research [26], angle-based features give better recognition results than coordinate-based ones.…”
Section: Application for Human Actions Recognition (mentioning)
confidence: 99%
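As a rough illustration of the angle-based features favored in the excerpt above, the sketch below computes joint angles from 3D coordinates. The joint-index triples in the usage note are purely hypothetical; the actual joint sets and feature configurations of [6,10,46] are not reproduced here.

```python
# A minimal sketch of angle-based skeleton features; the joint-index triples in
# the usage note are hypothetical and do not come from the cited papers.
import numpy as np

def joint_angle(a, b, c):
    """Angle (radians) at joint b between segments b->a and b->c; a, b, c are 3D points."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def angle_features(frame, triples):
    """frame: (J, 3) joint positions for one frame; triples: (i, j, k) joint indices."""
    return np.array([joint_angle(frame[i], frame[j], frame[k]) for i, j, k in triples])

# Example usage with assumed indices (e.g. shoulder-elbow-wrist, hip-knee-ankle):
# angles = angle_features(skeleton_frame, [(4, 5, 6), (12, 13, 14)])
```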
“…Our DTW classifier works on MoCap data recalculated according to (12). Each sample of the recording is a 12-dimensional vector of angle values.…”
Section: Application for Human Actions Recognition (mentioning)
confidence: 99%
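The excerpt names a DTW classifier over 12-dimensional angle vectors but does not show the matching itself, and equation (12) of the citing paper is not reproduced here. Below is a minimal, assumption-laden sketch of a DTW distance between two such angle sequences, which a nearest-neighbour rule could then use for classification.

```python
# A minimal sketch of a DTW distance over sequences of 12-dimensional angle
# vectors; the Euclidean per-frame cost is an assumption, and equation (12)
# of the citing paper is not reproduced.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """seq_a: (Ta, 12), seq_b: (Tb, 12) per-frame angle vectors; returns the DTW cost."""
    Ta, Tb = len(seq_a), len(seq_b)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])   # per-frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Ta, Tb]

# A nearest-neighbour classifier would assign the label of the training
# recording with the smallest DTW distance to the query sequence.
```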
“…Kinect skeletal joint structure [22]. For action recognition, a model was built that reduces the 3D joint coordinates to two ( and ) sets of 2D joint coordinates [23,24]. The model constructed with a 30° angle for the selected ( and ) axes, and the regions it defines, are shown in Figure 2. Following Shotton's method, the hip-center joint is taken as the reference among the joint points [25,26].…”
Section: Feature Extraction From Skeletal Joints (unclassified)
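The translated excerpt above describes reducing 3D Kinect joint coordinates to two 2D coordinate sets relative to the hip-center joint, but the names of the projection planes are missing from the extracted text. The sketch below shows one plausible reading of that reduction; the choice of planes and the hip-center index are assumptions, and the 30° regioning of the model is not implemented.

```python
# A minimal sketch of reducing 3D joints to two 2D coordinate sets relative to
# the hip-center joint; the projection planes and the hip-center index are
# assumptions, since the excerpt does not name them.
import numpy as np

HIP_CENTER = 0  # assumed index of the hip-center joint

def project_to_planes(joints):
    """joints: (J, 3) 3D joint positions for one frame.
    Returns two (J, 2) arrays: projections onto two coordinate planes,
    centered on the hip-center joint used as the reference point."""
    centered = joints - joints[HIP_CENTER]   # hip-center as the reference origin
    plane_a = centered[:, [0, 1]]            # first assumed 2D coordinate set
    plane_b = centered[:, [0, 2]]            # second assumed 2D coordinate set
    return plane_a, plane_b
```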