2013
DOI: 10.1007/978-3-642-41190-8_49

Space-Time Pose Representation for 3D Human Action Recognition

Abstract: 3D human action recognition is an important current challenge at the heart of many research areas concerned with the modeling of spatio-temporal information. In this paper, we propose representing human actions using spatio-temporal motion trajectories. In the proposed approach, each trajectory consists of one motion channel corresponding to the evolution of the 3D positions of all joint coordinates across the frames of an action sequence. Action recognition is achieved through a shape trajectory representation…
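The trajectory construction described in the abstract can be made concrete. Below is a minimal NumPy sketch, assuming skeleton data arrives as per-frame 3D joint positions; the array shapes and the function name are illustrative, not taken from the paper:

```python
import numpy as np

def action_trajectory(sequence):
    """Flatten per-frame 3D joint positions into one motion trajectory.

    sequence: array of shape (T, J, 3) -- T frames, J joints, xyz each.
    Returns an array of shape (T, 3*J): each row is the full pose at one
    frame, so the whole action traces a trajectory in R^(3J).
    """
    T, J, _ = sequence.shape
    return sequence.reshape(T, 3 * J)

# Example: a 45-frame clip of 20 Kinect joints -> trajectory in R^60.
clip = np.random.rand(45, 20, 3)   # stand-in for real skeleton data
traj = action_trajectory(clip)
print(traj.shape)                  # (45, 60)
```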

Cited by 53 publications (58 citation statements)
References 13 publications
“…-Random selection: the number of clusters required by the bag of key poses method is selected randomly within the interval [4,26] for the subsets AS1 and AS2 and the interval [44,76] for AS3. All the skeleton joints and training instances are included in the processing.…”
Section: Results (mentioning)
confidence: 99%
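The selection protocol quoted above can be sketched as follows. This is an illustrative reading only: KMeans stands in for the key-pose clustering step, and only the interval bounds [4,26] and [44,76] and the subset names come from the quote; everything else is assumed.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def random_key_poses(poses, subset):
    """Cluster training poses into a randomly sized bag of key poses.

    poses:  array of shape (N, D), one flattened skeleton per row.
    subset: 'AS1', 'AS2', or 'AS3' (MSRAction3D subsets from the quote).
    """
    lo, hi = (44, 76) if subset == 'AS3' else (4, 26)
    k = int(rng.integers(lo, hi + 1))      # random cluster count in [lo, hi]
    km = KMeans(n_clusters=k, n_init=10).fit(poses)
    return km.cluster_centers_             # the k key poses

key_poses = random_key_poses(np.random.rand(500, 60), 'AS1')
```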
“…The Kinect sensor provides the 3D coordinates of 20 skeleton joints, so motion trajectories in a 60-dimensional space can be associated with human motion [44]. A trajectory is the evolution of the positions of the joint coordinates along a sequence of frames related to an action.…”
Section: Related Work on RGB-D Sensors (mentioning)
confidence: 99%
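As an illustration of how such 60-dimensional trajectories can drive recognition, here is a minimal nearest-neighbor sketch with linear temporal resampling. It is a deliberate simplification, not the shape-trajectory analysis of the cited work; all names are hypothetical.

```python
import numpy as np

def resample(traj, n=50):
    """Linearly resample a (T, 60) trajectory to n frames."""
    T = len(traj)
    idx = np.linspace(0, T - 1, n)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    w = (idx - lo)[:, None]
    return (1 - w) * traj[lo] + w * traj[hi]

def nearest_neighbor(query, gallery, labels):
    """Label a query trajectory by its closest training trajectory."""
    q = resample(query)
    dists = [np.linalg.norm(q - resample(g)) for g in gallery]
    return labels[int(np.argmin(dists))]
```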
“…4. Specifically, given K human joints with …

Approach                    Representation             Encoding  Structure  Features
[175]                       Vector of Joints           Conc      Lowlv      Hand
Patsadu et al. [176]        Vector of Joints           Conc      Lowlv      Hand
Huang and Kitani [177]      Cost Topology              Stat      Lowlv      Hand
Devanne et al. [178]        Motion Units               Conc      Manif      Hand
Wang et al. [179]           Motion Poselets            BoW       Body       Dict
Wei et al. [180]            Structural Prediction      Conc      Lowlv      Hand
Gupta et al. [181]          3D Pose w/o Body Parts     Conc      Lowlv      Hand
Amor et al. [182]           Skeleton's Shape           Conc      Manif      Hand
Sheikh et al. [183]         Action Space               Conc      Lowlv      Hand
Yilma and Shah [184]        Multiview Geometry         Conc      Lowlv      Hand
Gong et al. [185]           Structured Time            Conc      Manif      Hand
Rahmani and Mian [186]      Knowledge Transfer         BoW       Lowlv      Dict
Munsell et al. [187]        Motion Biometrics          Stat      Lowlv      Hand
Lillo et al. [188]          Composable Activities      BoW       Lowlv      Dict
Wu et al. [189]             Watch-n-Patch              BoW       Lowlv      Dict
Gong and Medioni [190]      Dynamic Manifolds          BoW       Manif      Dict
Han et al. [191]            Hierarchical Manifolds     BoW       Manif      Dict
Slama et al. [192,193]      Grassmann Manifolds        BoW       Manif      Dict
Devanne et al. [194]        Riemannian Manifolds       Conc      Manif      Hand
Huang et al. [195]          Shape Tracking             Conc      Lowlv      Hand
Devanne et al. [196]        Riemannian Manifolds       Conc      Manif      Hand
Zhu et al. [197]            RNN with LSTM              Conc      Lowlv      Deep
Chen et al. [198]           EnwMi Learning             BoW       Lowlv      Dict
Hussein et al. [199]        Covariance of 3D Joints    Stat      Lowlv      Hand
Shahroudy et al. [200]      MMMP                       BoW       Body       Unsup
Jung and Hong [201]         Elementary Moving Pose     BoW       Lowlv      Dict
Evangelidis et al. [202]    Skeletal Quad              Conc      Lowlv      Hand
Azary and Savakis [203]     Grassmann Manifolds        Conc      Manif      Hand
Barnachon et al. [204]      Hist. of Action Poses      Stat      Lowlv      Hand
Shahroudy et al. [205]      Feature Fusion             BoW       Body       Unsup
Cavazza et al. [206]        Kernelized-COV             Stat      Lowlv      Hand …”
Section: Representations Based on Raw Joint Positions (mentioning)
confidence: 99%
“…As before, we use 60% of examples for training and the rest for testing. We use features based on the HOG descriptor, as shown in (30), for this dataset. Using the multi-level HDP-HMM with discriminative learning, we report overall classification accuracies of 81.2%, 78.1%, and 90.6% for the three sets, respectively.…”
Section: MSR-Action3D Dataset (mentioning)
confidence: 99%
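The per-frame HOG features and the 60/40 split mentioned in this quote can be reproduced in outline with scikit-image and scikit-learn. In this sketch a linear SVM stands in for the quoted multi-level HDP-HMM with discriminative learning, and all data is a toy stand-in; nothing here is the cited pipeline.

```python
import numpy as np
from skimage.feature import hog
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

def sequence_descriptor(frames):
    """Average per-frame HOG descriptors over a depth-map sequence."""
    feats = [hog(f, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2)) for f in frames]
    return np.mean(feats, axis=0)

# X: one descriptor per action sequence, y: action labels (toy stand-ins).
X = np.array([sequence_descriptor(np.random.rand(10, 64, 64))
              for _ in range(40)])
y = np.random.randint(0, 4, size=40)

# 60% of examples for training, the rest for testing, as in the quote.
Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=0.6, random_state=0)
clf = LinearSVC().fit(Xtr, ytr)
print("accuracy:", accuracy_score(yte, clf.predict(Xte)))
```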