We propose a deep learning framework that learns data-driven temporal priors to estimate 3D human pose from six body-worn magnetic inertial measurement unit (MIMU) sensors. Our method estimates 3D human pose, with an associated uncertainty, from these sparse body-worn sensors. We derive and implement a 3D angle representation that eliminates the yaw angle (and hence the magnetometer dependence) and show that 3D human pose can still be recovered from this reduced representation, albeit with increased uncertainty. We do not use kinematic acceleration as input and show that omitting it improves both accuracy and generalization to real sensor data from different subjects. Our framework is based on a bidirectional recurrent autoencoder; at inference time a sliding window is used instead of the full sequence (offline mode). The major contribution of our research is that 3D human pose is predicted from sparse sensors with a well-calibrated uncertainty that correlates with ambiguity and actual errors. We demonstrate our results on two real sensor datasets, DIP-IMU and TotalCapture, and achieve state-of-the-art accuracy. Our work confirms that the main limitation of sparse-sensor 3D human pose prediction is the lack of temporal priors; accordingly, fine-tuning on a small synthetic training set from the target domain improves accuracy.
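The yaw-eliminated orientation representation described above can be illustrated with a small sketch: given a sensor orientation matrix, factor out the heading (rotation about the global vertical axis) and keep only the yaw-free residual. This is a hypothetical NumPy illustration of the idea, not the paper's actual implementation; the function names are ours.

```python
import numpy as np

def yaw_of(Rm):
    """Heading angle: direction of the body x-axis projected onto the
    horizontal (global x-y) plane."""
    return np.arctan2(Rm[1, 0], Rm[0, 0])

def rot_z(a):
    """Rotation matrix about the global vertical (z) axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def remove_yaw(Rm):
    """Factor Rm = rot_z(yaw) @ residual and return the yaw-free residual.

    The residual no longer depends on the magnetometer-derived heading,
    so in principle it can be obtained without a magnetometer."""
    return rot_z(-yaw_of(Rm)) @ Rm
```

Because many full orientations map to the same yaw-free residual, the representation is ambiguous by construction, which is consistent with the increased uncertainty the abstract reports.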
Realistic estimation and synthesis of articulated human motion must satisfy anatomical constraints on joint angles. We use a data-driven approach to learn human joint limits from 3D motion capture datasets. We represent joint constraints with a new formulation (s1, s2, τ) based on the swing-twist representation in exponential-map form. Applying our parameterization to the Human3.6M dataset, we create a lookup map for each joint. These maps enable us to generate 'synthetic' datasets covering the entire rotation space of a given joint. A set of neural network discriminators is then trained on the synthetic datasets to classify joint rotations as valid or invalid. The discriminators achieve accuracies of 94.4–99.4% across joints. We validate the precision-accuracy trade-off of the discriminators and qualitatively evaluate classified poses with an interactive tool. The learned discriminators can be used as 'priors' for human pose estimation and motion synthesis.
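The swing-twist factorization underlying the (s1, s2, τ) parameterization can be sketched as follows: a joint rotation, given as a quaternion together with the bone's twist axis, splits into a twist about that axis and a swing about a perpendicular axis; the swing can then be encoded as a two-component exponential map (s1, s2) and the twist as an angle τ. The NumPy sketch below is our illustrative reading of that decomposition, not the authors' code.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    pw, pv = p[0], p[1:]
    qw, qv = q[0], q[1:]
    return np.concatenate(([pw * qw - pv @ qv],
                           pw * qv + qw * pv + np.cross(pv, qv)))

def quat_conj(q):
    """Conjugate (inverse for unit quaternions)."""
    return np.concatenate(([q[0]], -q[1:]))

def swing_twist(q, axis):
    """Decompose q = swing * twist, where twist rotates about `axis`
    (a unit vector) and swing rotates about an axis perpendicular to it."""
    proj = (q[1:] @ axis) * axis          # vector part projected onto axis
    twist = np.concatenate(([q[0]], proj))
    n = np.linalg.norm(twist)
    if n < 1e-9:                          # 180-degree swing: twist undefined
        twist = np.array([1.0, 0.0, 0.0, 0.0])
    else:
        twist = twist / n
    swing = quat_mul(q, quat_conj(twist))
    return swing, twist
```

From this factorization, τ is the rotation angle of `twist` and (s1, s2) are the two non-zero components of the swing's exponential map; since τ and the swing are bounded independently, per-joint limits become simple regions in this 3D space.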