2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2016.535

A Dual-Source Approach for 3D Pose Estimation from a Single Image

Abstract: One major challenge for 3D pose estimation from a single RGB image is the acquisition of sufficient training data. In particular, collecting large amounts of training data that contain unconstrained images and are annotated with accurate 3D poses is infeasible. We therefore propose to use two independent training sources. The first source consists of images with annotated 2D poses and the second source consists of accurate 3D motion capture data. To integrate both sources, we propose a dual-source approach tha…

Cited by 183 publications (209 citation statements)
References 32 publications
“…The second category [1,29,4,15,10,6,25,17] decouples 3D pose estimation into the well-studied 2D joint detection [18,24] and 3D pose estimation from the detected 2D joints. Akhter et al [1] propose a multi-stage approach to estimate the 3D pose from 2D joints using an over-complete dictionary of poses.…”
Section: Related Work
confidence: 99%
“…Bogo et al [4] estimate 3D pose by first fitting a statistical body shape model to the 2D joints, and then minimizing the error between the reprojected 3D model and detected 2D joints. Chen [6] and Yasin [25] regard 3D pose estimation as a matching between the estimated 2D pose and the 3D pose from a large pose library. Martinez et al [15] design a simple fully connected residual network to regress 3D pose from 2D joint detections.…”
Section: Related Work
confidence: 99%
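The matching-based approach described in this citation statement (Chen [6], Yasin [25]) retrieves a 3D pose from a large library by comparing 2D poses. A minimal numpy sketch of the retrieval idea follows; the unit-norm normalization and Euclidean distance are illustrative assumptions, not the exact pipeline of either paper.

```python
import numpy as np

def normalize_2d(pose_2d):
    # Center a (J, 2) pose at its mean and scale to unit Frobenius norm,
    # removing translation and camera scale before matching (assumed normalization).
    p = pose_2d - pose_2d.mean(axis=0)
    return p / np.linalg.norm(p)

def retrieve_3d(query_2d, library_2d, library_3d):
    # Return the library 3D pose whose stored 2D projection is the
    # nearest neighbor of the normalized query 2D pose.
    q = normalize_2d(query_2d)
    dists = [np.linalg.norm(q - normalize_2d(p)) for p in library_2d]
    return library_3d[int(np.argmin(dists))]
```

Because both query and library poses are normalized, the retrieval is invariant to image translation and scale, which is what makes matching a detected 2D pose against projections of motion-capture data feasible at all.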
“…Sminchisescu further utilized temporal consistency to propagate pose probabilities with a Bayesian mixture of experts Markov model [2007]. Relying on the recent advances in machine learning techniques and compute capabilities, approaches for direct 3D pose regression from the input image have been proposed, using structured learning of latent pose [Li et al 2015a;Tekin et al 2016a], joint prediction of 2D and 3D pose [Li and Chan 2014;Tekin et al 2016b;Yasin et al 2016], transfer of features from 2D datasets [Mehta et al 2016], novel pose space formulations [Pavlakos et al 2016] and classification over example poses [Pons-Moll et al 2014;Rogez and Schmid 2016]. Relative per-bone predictions [Li and Chan 2014], kinematic skeleton models , or root centered joint positions [Ionescu et al 2014a] are used as the eventual output space.…”
Section: Multi-view
confidence: 99%
“…Given 2D joint locations, lifting them to 3D pose is challenging. Existing approaches use bone length and depth ordering constraints [Mori and Malik 2006;Taylor 2000], sparsity assumptions [Wang et al 2014;Zhou et al 2015,a], joint limits [Akhter and Black 2015], inter-penetration constraints [Bogo et al 2016], temporal dependencies [Rhodin et al 2016b], and regression [Yasin et al 2016]. Treating 3D pose as a hidden variable in 2D estimation is an alternative [Brau and Jiang 2016].…”
Section: Multi-view
confidence: 99%
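The bone-length constraint cited above (Taylor 2000) is the classic building block for lifting: under scaled orthographic projection, the foreshortening of a bone of known length determines the magnitude of its relative depth, leaving only the depth ordering (sign) ambiguous. A sketch under those assumptions:

```python
import numpy as np

def lift_bone(p_parent_2d, p_child_2d, bone_len, scale):
    # Taylor-style depth from foreshortening under scaled orthography:
    # |dz| = sqrt(L^2 - ||dp||^2 / s^2).
    # The sign of dz (depth ordering) is ambiguous and must be resolved
    # by additional constraints, as the surveyed methods do.
    dp = np.asarray(p_child_2d, dtype=float) - np.asarray(p_parent_2d, dtype=float)
    planar = np.linalg.norm(dp) / scale
    dz_sq = bone_len ** 2 - planar ** 2
    return np.sqrt(max(dz_sq, 0.0))  # clamp guards against noisy 2D detections
```

With K bones there are up to 2^K sign combinations, which is why the cited works add joint limits, sparsity priors, or temporal dependencies to prune them.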
“…We follow the standard steps to align the 3D pose prediction with the groundtruth by aligning the position of the central hip joint, and use the Mean Per-Joint Position Error (MPJPE) between the groundtruth and the prediction as the evaluation metric. In some prior works [10], [41], [54], the pose prediction was further aligned with the groundtruth via a rigid transformation. The resulting MPJPE is termed Procrustes Aligned (PA) MPJPE.…”
Section: Datasets and Evaluation Protocols
confidence: 99%
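The evaluation protocol described in this citation statement can be sketched in numpy. The hip joint index and the similarity-transform (Kabsch/Procrustes) variant below are assumptions for illustration, not a reproduction of any one paper's evaluation code.

```python
import numpy as np

def mpjpe(pred, gt, root=0):
    # Mean Per-Joint Position Error after root (central hip) alignment.
    # pred, gt: (J, 3) arrays of joint positions; root is the assumed hip index.
    pred = pred - pred[root]
    gt = gt - gt[root]
    return float(np.linalg.norm(pred - gt, axis=1).mean())

def pa_mpjpe(pred, gt):
    # MPJPE after Procrustes alignment: find the similarity transform
    # (rotation R, scale s, translation) that best maps pred onto gt.
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    X, Y = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(X.T @ Y)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # reflection correction
    R = Vt.T @ D @ U.T                  # optimal rotation: R @ x ~ y
    s = np.trace(np.diag(S) @ D) / (X ** 2).sum()
    aligned = s * X @ R.T + mu_g
    return float(np.linalg.norm(aligned - gt, axis=1).mean())
```

PA-MPJPE is always at most the root-aligned MPJPE, since the Procrustes fit additionally removes global rotation and scale errors before measuring per-joint distances.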