We present a new approach to realistic hand modeling and deformation with real-time performance. We model the underlying shape of a human hand by means of sweeps which follow a simplified skeleton. The resulting swept surfaces are blended, and an auxiliary surface is then bound to the swept representation in the palm region. In the areas of this palm-control surface where bulges occur in certain poses of a real hand, the vertices are given their own trajectories, so that the palm forms realistic shapes as the joints bend. Palm lines can also be modeled as valleys in the skin by sketching them on a displacement map on the palm-control surface, and activating them when appropriate joint movements take place. Self-intersections and collisions are detected using geometric primitives that are automatically generated from, and deform with, the sweeps and palm surface. Our algorithm runs in real time, and the naturalism of its results is demonstrated by comparative images of modeled and real hands, including several challenging poses.
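The core of a sweep-based representation is sampling cross-sections along the bones of a simplified skeleton. The following is a minimal sketch of that idea, not the paper's implementation: it sweeps a fixed-radius circular cross-section along a two-bone "finger" polyline (the function name, radius, and sampling counts are assumptions for illustration).

```python
import numpy as np

def sweep_tube(joints, radius=0.4, n_ring=12, n_seg=8):
    """Sample circular cross-sections along each bone of a polyline skeleton."""
    verts = []
    for a, b in zip(joints[:-1], joints[1:]):
        axis = (b - a) / np.linalg.norm(b - a)
        # Build an orthonormal frame (u, v) perpendicular to the bone axis.
        ref = np.array([0.0, 0.0, 1.0]) if abs(axis[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
        u = np.cross(axis, ref)
        u /= np.linalg.norm(u)
        v = np.cross(axis, u)
        # Place rings of vertices at evenly spaced stations along the bone.
        for t in np.linspace(0.0, 1.0, n_seg, endpoint=False):
            center = (1 - t) * a + t * b
            for ang in np.linspace(0.0, 2 * np.pi, n_ring, endpoint=False):
                verts.append(center + radius * (np.cos(ang) * u + np.sin(ang) * v))
    return np.array(verts)

# Two-bone "finger": a straight proximal bone and a bent distal bone.
skeleton = np.array([[0.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 5.0, 1.5]])
tube = sweep_tube(skeleton)
print(tube.shape)  # → (192, 3): 2 bones × 8 stations × 12 ring points
```

In the actual method, adjacent swept surfaces would additionally be blended at the joints, and the palm-control surface with its per-vertex trajectories handles the bulging the simple tube above cannot capture.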
We introduce an efficient surface area evaluation method based on smooth surface reconstruction of three-dimensional scanned human body data. Surface area evaluations for various body parts are compared with results from the traditional alginate-based method, and the two sets of results show close agreement. We expect that our surface area evaluation method can serve as an alternative to the cumbersome alginate method of measuring surface area.
The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of focus and off-focus areas. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free space that do not belong to any object surface in 3D space. If not removed, off-focus areas adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method assumes that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results demonstrate that this method is capable of removing off-focus points in the reconstructed depth image. The results also show that using a GPU to remove the off-focus points greatly improves the overall computational speed compared with using a CPU.
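The variance test at the heart of the classification step can be sketched on the CPU with NumPy; the function name, threshold, and synthetic inputs below are assumptions for illustration, and the paper's GPU kernels would parallelize the same per-pixel computation across depth planes.

```python
import numpy as np

def classify_focus(elemental_images, shifts, threshold):
    """Reconstruct one depth plane by shifting each elemental image by its
    lookup-table (dy, dx) offset, then mark pixels whose sample variance
    across the shifted stack is below `threshold` as focus points."""
    stack = np.stack([np.roll(img, (dy, dx), axis=(0, 1))
                      for img, (dy, dx) in zip(elemental_images, shifts)])
    depth_image = stack.mean(axis=0)   # reconstructed depth-plane image
    variance = stack.var(axis=0)       # per-pixel variance across samples
    focus_mask = variance < threshold  # True = focus, False = off-focus
    return depth_image, focus_mask

# At the correct depth, the shifted elemental images agree pixel-wise,
# so the variance is near zero and the pixels are classified as focus.
rng = np.random.default_rng(0)
base = rng.random((16, 16))
consistent = [base.copy() for _ in range(9)]
zero_shifts = [(0, 0)] * 9
_, mask = classify_focus(consistent, zero_shifts, 1e-6)
print(mask.all())  # → True
```

Off-focus pixels, by contrast, mix samples from different object points, so their variance stays high and they fall outside the mask.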