SecondSkin estimates an appearance model for an object visible in a video sequence, without the need for complex interaction or any calibration apparatus. This model can then be transferred to other objects, allowing a non-expert user to insert a synthetic object into a real video sequence so that its appearance matches that of an existing object and changes appropriately throughout the sequence. Because the method requires no prior knowledge about the scene, the lighting conditions, or the camera, it is applicable to video that was not captured with this purpose in mind. However, this lack of prior knowledge precludes the recovery of separate lighting and surface reflectance information, so the SecondSkin appearance model combines these factors. The appearance model does require a dominant light-source direction, which we estimate via a novel process involving a small amount of user interaction. The resulting model estimate provides exactly the information required to transfer the appearance of the original object to new geometry composited into the same video sequence.
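To make the high-level pipeline concrete, the sketch below illustrates one way a combined lighting-and-reflectance appearance model could be fitted from a tracked source object and then applied to new geometry. It is a minimal, hypothetical stand-in rather than the paper's actual model: it assumes per-sample surface normals and observed colors are available for the source object, and it simply bins observed color by the angle between the normal and the user-assisted dominant light direction, then looks up that table for the inserted object's normals.

```python
import numpy as np

def fit_appearance_model(normals, colors, light_dir, n_bins=32):
    """Hypothetical combined appearance model: mean observed RGB as a
    function of the angle between the surface normal and the dominant
    light direction (lighting and reflectance folded together)."""
    light_dir = light_dir / np.linalg.norm(light_dir)
    cos_theta = normals @ light_dir                      # in [-1, 1]
    bins = np.clip(((cos_theta + 1.0) * 0.5 * n_bins).astype(int), 0, n_bins - 1)
    model = np.zeros((n_bins, 3))
    counts = np.zeros(n_bins)
    for b, c in zip(bins, colors):
        model[b] += c
        counts[b] += 1
    nonzero = counts > 0
    model[nonzero] /= counts[nonzero, None]
    # Fill bins with no observations by interpolating from observed ones.
    idx = np.arange(n_bins)
    for ch in range(3):
        model[~nonzero, ch] = np.interp(idx[~nonzero], idx[nonzero], model[nonzero, ch])
    return model

def shade_new_geometry(normals, model, light_dir):
    """Look up colors for composited geometry from the fitted model."""
    light_dir = light_dir / np.linalg.norm(light_dir)
    cos_theta = normals @ light_dir
    n_bins = model.shape[0]
    bins = np.clip(((cos_theta + 1.0) * 0.5 * n_bins).astype(int), 0, n_bins - 1)
    return model[bins]

# Usage sketch with synthetic data standing in for tracked video observations.
rng = np.random.default_rng(0)
light = np.array([0.3, 0.8, 0.5])
src_normals = rng.normal(size=(500, 3))
src_normals /= np.linalg.norm(src_normals, axis=1, keepdims=True)
src_colors = np.clip(np.outer(src_normals @ (light / np.linalg.norm(light)),
                              [0.9, 0.6, 0.4]), 0.0, 1.0)
model = fit_appearance_model(src_normals, src_colors, light)
new_normals = rng.normal(size=(100, 3))
new_normals /= np.linalg.norm(new_normals, axis=1, keepdims=True)
new_colors = shade_new_geometry(new_normals, model, light)
```

A one-dimensional table keyed on the normal-to-light angle is only an illustrative design choice; the key point it demonstrates is that, without separating lighting from reflectance, appearance can still be transferred by indexing observed color against geometry relative to the estimated dominant light direction.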