Overview

We present an image-based approach for capturing the appearance of a walking or running person so that they can be rendered realistically under variable viewpoint and illumination. Considerable work has addressed postproduction control of the viewpoint and illumination of a human performance, but most proposed systems address only one of the two, e.g. [Wilburn et al. 2005] and [Wenger et al. 2005]. [Theobalt et al. 2005] addressed control of both viewpoint and illumination, but that approach is challenged by low sampling of both the lighting and view dimensions. We take a step toward an image-based approach to obtaining postproduction control over both viewpoint and illumination of cyclic full-body human motion by combining the performance relighting technique of [Wenger et al. 2005] with a novel view generation technique based on a flowed reflectance field. By restricting our consideration to cyclic motion such as walking and running, we can acquire a 2D array of views by slowly rotating the subject in front of a 1D vertical array of three high-speed cameras and segmenting the data per motion cycle. We then use a combination of light field rendering and view interpolation based on optical flow to render the subject from new viewpoints.

Capture

The data were captured using an 8 m lighting apparatus related to that of [Wenger et al. 2005] but designed for full human body capture (Fig. 1(a)). We programmed the device with a 33-frame lighting sequence comprising 26 basis lighting conditions, three evenly spaced tracking frames, three corresponding matte frames, and one stripe frame (not used at present). We capture the subject using a high-speed camera while the lighting basis repeats at 30 Hz, giving a total frame rate of 990 fps. At this frame rate, we can capture 36 motion cycles at 320×448 pixels within the 8 GB memory capacity of the high-speed camera.
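The capture budget above follows directly from the lighting sequence length and its repetition rate; a minimal sketch of that arithmetic (constant names are illustrative, not from the authors' code):

```python
# Capture timing from the text: a 33-frame lighting sequence
# (26 basis + 3 tracking + 3 matte + 1 stripe) repeats at 30 Hz.
LIGHTING_SEQUENCE_FRAMES = 33
SEQUENCES_PER_SECOND = 30

frame_rate_fps = LIGHTING_SEQUENCE_FRAMES * SEQUENCES_PER_SECOND
print(frame_rate_fps)  # 990

# Rough record-length estimate, assuming (hypothetically) one byte per
# pixel of raw sensor data and 8 GB = 8e9 bytes of camera memory.
BYTES_PER_FRAME = 320 * 448
record_seconds = 8e9 / BYTES_PER_FRAME / frame_rate_fps
print(round(record_seconds, 1))  # on the order of a minute of capture
```

Under those assumptions, roughly a minute of footage fits in camera memory, which is consistent with recording 36 motion cycles.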
For each motion cycle we have 32 and 25 lighting sequences for the walking and running data sets, respectively. The 36 locomotion cycles recorded by the three cameras, if assumed identical, yield 108 relightable views of the walk cycle.

Registration

Once the data is captured, we first compute the alpha channels and pre-matte the dataset; this increases the compression ratio and also helps the optical flow algorithm. We then compute optical flow to spatially register the dataset, similar to [Wilburn et al. 2005]. The images are temporally registered within each lighting sequence using a process similar to that of [Wenger et al. 2005]: all frames within a lighting sequence are warped toward the tracking frame in the middle of the sequence. After registration we have the equivalent of a 36×3 grid (Fig. 1(b)) of 4D reflectance fields. To create the flowed reflectance field, we compute flow fields between each viewpoint and its 4-neighbors.

Rendering

Our rendering process consists of five steps: lighting, image warping, light field interpolation, shadow rendering, and compositing. We first relight the ref...
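The lighting step of the pipeline above is standard image-based relighting: a relit frame is a linear combination of the registered basis-lighting images, weighted by the novel illumination sampled at each basis light's direction. A minimal sketch, assuming NumPy arrays (the function name and shapes are illustrative, not from the authors' code):

```python
import numpy as np

def relight(basis_images: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Linearly combine basis-lighting images into a relit frame.

    basis_images: (26, H, W, 3) registered images, one per basis light.
    weights:      (26,) RGB-agnostic intensity of the novel lighting
                  sampled at each basis direction.
    Returns a relit (H, W, 3) image.
    """
    # Contract the lighting axis: sum_i weights[i] * basis_images[i].
    return np.tensordot(weights, basis_images, axes=1)

# Toy usage: uniform weights reproduce the average of the basis images.
basis = np.random.rand(26, 4, 4, 3)
relit = relight(basis, np.full(26, 1.0 / 26))
assert np.allclose(relit, basis.mean(axis=0))
```

Because relighting is linear, it commutes with the per-sequence warps, so the basis images can be registered once and relit under arbitrary novel illumination afterward.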