Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No.PR00662)
DOI: 10.1109/cvpr.2000.855885
Reconstruction of articulated objects from point correspondences in a single uncalibrated image

Cited by 115 publications (50 citation statements); references 10 publications.
“…A common approach with the former representation is to "lift" 2D keypoints (either ground truth or from a 2D pose detector) to 3D. This has been recently done with neural networks [28,57,31] and previously using a dictionary of 3D skeletons [38,2,59,54] or other priors [47,50,2] to constrain the problem. The point cloud representation also allows one to train a CNN to regress directly from an image (instead of 2D keypoints) to 3D joints using supervision from motion capture datasets like Human 3.6M [35,41,34].…”
Section: Related Work
confidence: 99%
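The dictionary-based lifting mentioned in the statement above can be sketched in a few lines: under an orthographic camera aligned with the dictionary frame, expressing the 3D pose as a linear combination of basis skeletons reduces lifting to a linear least-squares fit of the combination coefficients. The dictionary, joint count, and camera model below are illustrative assumptions, not details from the cited papers.

```python
import numpy as np

# Minimal sketch of dictionary-based 2D-to-3D lifting: solve for
# coefficients c such that the x/y rows of sum_k c_k B_k match the
# observed 2D keypoints, then read off the full 3D skeleton.
# The random dictionary B is a stand-in for a learned one.

rng = np.random.default_rng(0)
J, K = 15, 8                          # joints, basis skeletons (illustrative)
B = rng.standard_normal((K, 3, J))    # dictionary of 3D basis skeletons

def lift(x2d, B):
    """Lift (2, J) keypoints to a (3, J) skeleton via least squares."""
    K = B.shape[0]
    A = B[:, :2, :].reshape(K, -1).T          # (2J, K) design matrix
    c, *_ = np.linalg.lstsq(A, x2d.reshape(-1), rcond=None)
    return np.tensordot(c, B, axes=1)         # reconstructed 3D pose

# Synthetic check: a pose in the dictionary span is recovered exactly.
c_true = rng.standard_normal(K)
X_true = np.tensordot(c_true, B, axes=1)
X_hat = lift(X_true[:2], B)
```

Real methods additionally estimate a camera rotation and enforce sparsity on `c`; this sketch isolates only the linear-combination idea.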
“…A key challenge with 3D pose estimation in-the-wild is the lack of ground truth for people performing arbitrary, unconstrained actions (as typically found in images scraped from the internet). However, a suitable proxy for 3D pose estimation quality is ordinal depth [47,34], i.e., given two keypoints, predict the relative depth ordering by specifying which keypoint is in front of the other.…”
Section: Ordinal Depth
confidence: 99%
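The ordinal-depth proxy described above is easy to make concrete: for each pair of keypoints, emit a label saying which one is closer to the camera. The tolerance `tau` for treating a pair as equidistant is an illustrative choice, not a value from the cited work.

```python
import numpy as np

# Sketch of ordinal-depth labels: camera-frame z is taken as depth,
# so the joint with smaller z is "in front".

def ordinal_label(z_i, z_j, tau=0.05):
    """-1 if joint i is in front, +1 if joint j is in front,
    0 if the pair is roughly equidistant (within tau)."""
    if abs(z_i - z_j) < tau:
        return 0
    return -1 if z_i < z_j else 1

def all_pairs_labels(z, tau=0.05):
    """Ordinal labels for every joint pair, given per-joint depths z."""
    n = len(z)
    return {(i, j): ordinal_label(z[i], z[j], tau)
            for i in range(n) for j in range(i + 1, n)}
```

Such labels can be collected from human annotators on in-the-wild images, which is what makes them a practical proxy when metric 3D ground truth is unavailable.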
“…Our method first uses [ZFL*10, CGZZ] to estimate the mannequin's 3D pose and shape (Figure 2(b)) from the input image. The 3D pose is recovered by a semi-automatic pose estimation method [Tay00] using the user-specified 2D joints. The recovered 3D orientations and rotations of the skeletal bones can be interactively refined by users.…”
Section: Garment Initialization
confidence: 99%
“…where L is one point in the oriented facet F_{l_i}, L^J_{l_i} is the 3D joint for bone l_i recovered by the semi-automatic pose estimation method [Tay00], and n_{l_i} is the normal of the oriented facet F_{l_i}. R_{l_i} is the 3-by-3 rotation matrix of bone l_i, calculated using the absolute angles of the recovered pose.…”
Section: Garment Initialization
confidence: 99%
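The per-bone rotation matrix mentioned in the statement above can be sketched by composing elementary rotations from the recovered absolute angles. The Z-Y-X Euler convention used here is an assumption for illustration; the quoted passage does not specify the convention.

```python
import numpy as np

# Sketch: build a 3-by-3 bone rotation from absolute angles by
# composing rotations about the z, y, and x axes (assumed convention).

def rot_from_angles(yaw, pitch, roll):
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx   # proper rotation: orthonormal, det = +1

R = rot_from_angles(0.3, -0.2, 0.1)
```

Any composition of elementary rotations yields a proper rotation, so the result is always orthonormal with determinant +1, regardless of the chosen angle convention.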