ACM SIGGRAPH 2004 Sketches (SIGGRAPH '04), 2004
DOI: 10.1145/1186223.1186260

Skeletal parameter estimation from optical motion capture data

Abstract: In this paper we present an algorithm for automatically estimating a subject's skeletal structure from optical motion capture data. …

Cited by 41 publications (48 citation statements). References 14 publications.
“…Since pose estimation is much better-posed in 2D than in 3D, a popular way to infer joint positions is to use a generative model to find a 3D pose whose projection aligns with the 2D image data. In the past, this usually involved inferring a 3D human pose by optimizing an energy function derived from image information, such as silhouettes [6,14,21,22,25,31,44,49,60], trajectories [74], feature descriptors [58,62,63] and 2D joint locations [2,3,5,20,36,51,57,68,69]. Another class of approaches retrieves the pose from a dictionary of 3D poses based on similarity with the 2D image evidence [18,26,39,41,42].…”
Section: Related Work
confidence: 99%
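
The excerpt above describes generative approaches that optimize a 3D pose until its projection matches the 2D image evidence. Below is a minimal sketch of that idea, not any cited paper's method: it refines an initial 3D pose against 2D joint detections by least-squares minimization of reprojection error, assuming a known pinhole camera matrix K, raw 3D joint coordinates as the pose parameterization, and no pose prior.

import numpy as np
from scipy.optimize import least_squares

def project(points_3d, K):
    """Project Nx3 camera-space points to Nx2 pixel coordinates (pinhole model)."""
    uvw = points_3d @ K.T                # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]      # perspective divide

def reprojection_residual(x, joints_2d, K):
    """Difference between projected 3D joints and observed 2D joints."""
    joints_3d = x.reshape(-1, 3)
    return (project(joints_3d, K) - joints_2d).ravel()

def fit_pose(joints_2d, K, init_3d):
    """Least-squares refinement of an initial 3D pose against 2D evidence."""
    result = least_squares(reprojection_residual, init_3d.ravel(),
                           args=(joints_2d, K))
    return result.x.reshape(-1, 3)

# Usage with synthetic data: perturb a known pose and refine it back.
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
true_pose = np.random.rand(15, 3) + np.array([0.0, 0.0, 3.0])   # 15 joints in front of the camera
observed_2d = project(true_pose, K)
recovered = fit_pose(observed_2d, K, true_pose + 0.05 * np.random.randn(15, 3))

On its own this energy is under-constrained (depth is ambiguous from a single view); a full system would add further constraints, as the excerpt's references do with silhouettes, trajectories, feature descriptors or pose dictionaries.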
“…The skeleton data, containing the 3-D position vectors of a set of key joints in each frame, can be extracted by low-cost RGB-D sensors [26] (Kinect, RealSense, etc.) or motion capture systems [27]. On the other hand, some works achieve similar action recognition from depth images [19], [34], which capture the point clouds of the human body and background in 3-D space.…”
Section: Extensions on Multiple Rigid Bodies
confidence: 99%
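
As a concrete illustration of the skeleton representation described above, the sketch below stores a sequence as per-frame 3-D joint position vectors and applies a simple root-centering and scale normalization; the array layout, root-joint index and normalization step are assumptions for illustration, not details from the cited works.

import numpy as np

ROOT = 0  # hypothetical index of the hip/root joint

def normalize_skeleton(seq):
    """Make a (frames x joints x 3) sequence invariant to global translation and body scale
    by expressing joints relative to the root and dividing by a crude scale estimate."""
    centered = seq - seq[:, ROOT:ROOT + 1, :]          # remove global translation
    scale = np.linalg.norm(centered, axis=-1).mean()   # crude per-sequence scale estimate
    return centered / max(scale, 1e-8)

# Usage: 120 frames of a 20-joint skeleton (e.g. as exported from a Kinect-style sensor).
sequence = np.random.rand(120, 20, 3)
features = normalize_skeleton(sequence)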
“…geometric features of pixels which capture the point clouds of the human body and the background in 3-D space [18]. The 3-D skeleton of a human body captured by RGB-D sensors [26] or motion capture systems [27] has also been studied intensively for human action representation, owing to its robustness to variations in viewpoint, body scale and motion speed, as well as its real-time performance [20], [21]. In this paper, we extend the RRV descriptor to multiple rigid bodies for skeleton-based human action recognition.…”
confidence: 99%
“…The information from the markers can be useful to obtain the kinematic characteristics of the human. In [4] an algorithm for automatically estimating a subject's skeletal structure from optical motion capture data is presented. This algorithm clusters the markers into segment groups, determines the topological connectivity between these groups, and locates the positions of their connecting joints.…”
Section: Human to Humanoid Motion - State of the Art
confidence: 99%
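
The following sketch illustrates the first step of such a pipeline in spirit only, not the authors' exact algorithm: markers attached to the same rigid segment keep nearly constant mutual distances, so the standard deviation of pairwise marker distances over a capture can drive a hierarchical clustering into segment groups. The marker array layout and the number of segments are assumptions.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def segment_markers(markers, n_segments):
    """Cluster (frames x markers x 3) marker trajectories into rigid-segment groups."""
    # Pairwise marker distances in every frame: (frames, n, n).
    diffs = markers[:, :, None, :] - markers[:, None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Markers on the same rigid segment -> near-zero variation of their mutual distance.
    variation = dists.std(axis=0)
    condensed = squareform(variation, checks=False)
    tree = linkage(condensed, method='average')
    return fcluster(tree, t=n_segments, criterion='maxclust')

# Usage: 41 markers tracked over 300 frames, grouped into an assumed 15 segments.
labels = segment_markers(np.random.rand(300, 41, 3), n_segments=15)

The remaining steps the excerpt mentions, inferring the connectivity between the groups and locating their connecting joints, would build on these group labels.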