Characterizing the variation of human body shape is fundamentally important in applications ranging from animation to product design. 3D scanning technology makes it possible to digitize the complete surfaces of a large number of human bodies, providing much richer information about body shape than traditional anthropometric measurements. This technology opens up opportunities to extract new measurements for quantifying body shape. In this paper, we present a new method for extracting the main modes of variation of the human shape from a 3D anthropometric database. Previous approaches rely on anatomical landmarks; using a volumetric representation, we show that human shape analysis can be performed even when such information is unavailable. We first introduce a technique for repairing the 3D models obtained from the original scans. Principal component analysis is then applied to the volumetric description of a set of human models to extract the dominant components of shape variability for a target population. We demonstrate that the original models can be reconstructed accurately from a small number of components. Finally, we provide tools for visualizing the main modes of human shape variation.
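To make the analysis step concrete, the sketch below shows how principal component analysis can be applied to a volumetric (voxel occupancy) description of a set of body models. It is a minimal illustration, not the authors' code: it assumes the repaired scans have already been voxelized into a fixed-size occupancy grid, and all function names are placeholders.

```python
# Hypothetical sketch: PCA over voxelized body models.
# Assumes each repaired scan is already converted to a fixed-size
# binary occupancy grid of identical dimensions.
import numpy as np

def extract_shape_modes(grids, n_components=20):
    """Compute dominant modes of shape variation with PCA.

    grids: array of shape (n_models, nx, ny, nz) with voxel occupancies.
    Returns the mean shape, the principal components, and per-model coefficients.
    """
    n_models = grids.shape[0]
    X = grids.reshape(n_models, -1).astype(np.float64)   # flatten each volume
    mean_shape = X.mean(axis=0)
    Xc = X - mean_shape                                   # center the data
    # SVD of the centered data matrix yields the principal components.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]                        # modes of variation
    coeffs = Xc @ components.T                            # reduced representation
    return mean_shape, components, coeffs

def reconstruct(mean_shape, components, coeffs, grid_shape):
    """Rebuild an approximate volume from a reduced set of coefficients."""
    flat = mean_shape + coeffs @ components
    return flat.reshape(grid_shape)
```

Sweeping a single coefficient in `coeffs` while holding the others fixed and reconstructing the volume is one way to visualize an individual mode of shape variation.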
We present a fully automatic approach for finding dense point-to-point correspondences between two deformed surfaces that represent different postures of the same non-rigid object. The approach requires no prior knowledge of the shapes being registered or of their initial alignment. We consider surfaces represented by possibly incomplete triangular meshes and model the deformations of an object as isometries. To solve the correspondence problem, our approach maps the intrinsic geometries of the surfaces into a low-dimensional Euclidean space via multi-dimensional scaling. This yields posture-invariant shapes that can be registered with rigid correspondence algorithms.
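The core of this pipeline is the multi-dimensional scaling step. The following sketch shows classical MDS applied to a matrix of geodesic distances between mesh vertices; it is an illustrative implementation under stated assumptions (the geodesic distances are presumed precomputed, e.g. with a fast-marching or Dijkstra routine on the mesh), not the authors' code.

```python
# Illustrative sketch: classical multi-dimensional scaling of a geodesic
# distance matrix to obtain a posture-invariant (canonical) embedding.
import numpy as np

def classical_mds(D, dim=3):
    """Embed vertices into `dim`-dimensional Euclidean space so that
    pairwise Euclidean distances approximate the entries of D.

    D: (n, n) symmetric matrix of geodesic distances between vertices.
    Returns an (n, dim) array of embedded coordinates (the canonical form).
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:dim]    # keep the largest eigenvalues
    scale = np.sqrt(np.maximum(eigvals[order], 0.0))
    return eigvecs[:, order] * scale           # coordinates in embedding space
```

Because isometric deformations leave geodesic distances (approximately) unchanged, the two embedded canonical forms differ mainly by a rigid transformation and can then be aligned with a standard rigid registration method such as ICP.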
We present an algorithm to predict landmark locations on 3D human scans in varying poses. Our method learns bending-invariant landmark properties and, using canonical forms, the spatial relationships between pairs of landmarks. This information is modeled by a Markov network in which each node corresponds to a landmark position and each edge represents the spatial relationship between a pair of landmarks. We perform probabilistic inference over the Markov network to predict the landmark locations on human body scans in varying poses. We evaluated the algorithm on 200 models with different shapes and poses; the results show that most landmarks are predicted accurately.
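As a simplified illustration of inference over such a network, the sketch below performs MAP inference for a chain-structured subset of landmarks by dynamic programming. The paper's network is more general than a chain, and the unary and pairwise potentials here are illustrative stand-ins rather than the learned bending-invariant properties and spatial relationships.

```python
# Minimal sketch: MAP inference over a chain-structured Markov network of
# landmarks. Each landmark has candidate 3D positions on the scan;
# unary[i][k] scores candidate k for landmark i, and pairwise[i][k, l]
# scores the pair of candidates (k, l) for neighbouring landmarks (i, i+1).
# All scores are interpreted as log-potentials.
import numpy as np

def map_chain_inference(unary, pairwise):
    """Viterbi-style dynamic programming over a chain of landmarks.

    Returns the index of the best candidate for each landmark.
    """
    n = len(unary)
    msgs = [np.asarray(unary[0], dtype=float)]
    back = []
    for i in range(1, n):
        scores = msgs[-1][:, None] + pairwise[i - 1] + np.asarray(unary[i])[None, :]
        back.append(np.argmax(scores, axis=0))   # best previous candidate
        msgs.append(np.max(scores, axis=0))
    # Backtrack to recover the best assignment of candidates.
    best = [int(np.argmax(msgs[-1]))]
    for i in range(n - 2, -1, -1):
        best.append(int(back[i][best[-1]]))
    return best[::-1]
```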
We propose a posture-invariant surface descriptor for triangular meshes. Using intrinsic geometry, the surface is first transformed into a representation that is independent of posture. The spin image is then adapted to derive a descriptor for this representation. The descriptor is used to extract surface features automatically; it is invariant to rigid and isometric deformations and robust to noise and changes in resolution. We demonstrate the result by using the automatically extracted features to find correspondences between articulated meshes.
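For reference, the sketch below computes a basic spin image for an oriented point on a surface. In the posture-invariant variant described above, the points would come from the isometry-invariant (canonical) representation rather than the raw mesh; the bin size and image width here are assumed parameters, not values from the paper.

```python
# Illustrative sketch of a basic spin-image computation for one oriented point.
import numpy as np

def spin_image(points, p, n, bin_size=0.01, image_width=16):
    """Accumulate surrounding points into a 2D (alpha, beta) histogram.

    points: (N, 3) surface points, p: (3,) basis point, n: (3,) unit normal.
    alpha is the radial distance from the normal line through p,
    beta is the signed distance along the normal.
    """
    d = points - p
    beta = d @ n
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta ** 2, 0.0))
    img = np.zeros((image_width, image_width))
    i = np.floor(image_width / 2 - beta / bin_size).astype(int)   # row index
    j = np.floor(alpha / bin_size).astype(int)                    # column index
    valid = (i >= 0) & (i < image_width) & (j >= 0) & (j < image_width)
    np.add.at(img, (i[valid], j[valid]), 1.0)
    return img
```

Descriptors computed this way can be compared (e.g. by correlation) to match features between articulated meshes.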