We developed a 3D archive system for Japanese traditional performing arts. The system generates sequences of 3D actor models of the performances from multi-view video using a graph-cuts algorithm and stores them together with CG background models and related information. The system can show a scene from any viewpoint: the 3D actor model is integrated with the background model, and the integrated model is projected to a viewpoint that the user indicates with a viewpoint controller.

A challenge in generating the actor models is how to reconstruct thin or slender parts. Japanese traditional costumes for performances include slender parts such as long sleeves, fans, and strings that may be manipulated during the performance. The graph-cuts algorithm is a powerful 3D reconstruction tool, but it tends to cut off those parts because it uses an energy-minimization process. Hence, finding a way to reconstruct such parts is important if we are to preserve these arts for future generations. We therefore devised an adaptive erosion method that works on the visual hull and applied it to the graph-cuts algorithm to extract interior nodes in the thin parts and prevent the thin parts from being cut off. Another tendency of reconstruction with the graph-cuts algorithm is over-shrinkage of the reconstructed models, which arises because the energy can also be reduced by cutting inside the true surface. To avoid this, we applied a silhouette-rim constraint defined by the number of silhouette rims passing through each node.

By applying the adaptive erosion process and the silhouette-rim constraint, we succeeded in constructing a virtual performance with costumes that include thin parts. This paper presents the results of 3D reconstruction using the proposed method and some outputs of the 3D archive system.
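The idea of erosion that adapts to local thickness can be illustrated with a minimal 2D sketch. This is not the paper's exact 3D formulation; the function names and the medial-core heuristic here are illustrative assumptions. Plain erosion of a binary occupancy grid deletes a one-voxel-wide part outright, whereas an adaptive variant keeps the medial "core" cells (local maxima of the interior distance), so thin parts survive to seed interior nodes for the graph cut:

```python
import numpy as np

def interior_distance(hull):
    """Number of 4-neighbour erosion rounds each occupied cell survives
    (1 = surface layer) on a 2D boolean occupancy grid."""
    dist = np.zeros(hull.shape, dtype=int)
    cur = hull.copy()
    d = 0
    while cur.any():
        d += 1
        dist[cur] = d
        p = np.pad(cur, 1, constant_values=False)
        # keep only cells whose four neighbours are all occupied
        cur = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
               & p[1:-1, :-2] & p[1:-1, 2:])
    return dist

def adaptive_erosion(hull, depth):
    """Erode `depth` layers as usual, but keep the medial 'core' cells
    (local maxima of the interior distance), so a thin part is not
    deleted outright.  Illustrative sketch only."""
    dist = interior_distance(hull)
    eroded = dist > depth
    p = np.pad(dist, 1)  # zero-padded distance field
    core = (hull & (dist >= p[:-2, 1:-1]) & (dist >= p[2:, 1:-1])
                 & (dist >= p[1:-1, :-2]) & (dist >= p[1:-1, 2:]))
    return eroded | core

# A thick body with a one-cell-wide "sleeve": plain erosion of one layer
# removes the sleeve entirely, adaptive erosion keeps its core.
hull = np.zeros((10, 12), dtype=bool)
hull[1:7, 1:7] = True   # thick body
hull[3, 7:11] = True    # thin sleeve
plain = interior_distance(hull) > 1
adaptive = adaptive_erosion(hull, 1)
```

Under this sketch, `plain[3, 7:11]` is entirely empty while `adaptive[3, 7:11]` retains core cells of the sleeve.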
The main purpose of our research was to generate bullet time of dynamically moving subjects in 3D space, or multiple shots of subjects within 3D space. We also wanted to create a practical and generic bullet time system that required less time for advance preparation and generated bullet time in semi-real time after the subjects had been captured, enabling replays in sports broadcasting. To achieve this, we developed a multi-viewpoint robotic camera system. In our system, a cameraman controls the multi-viewpoint robotic cameras to simultaneously focus on subjects in 3D space and captures multi-viewpoint videos. Bullet time is generated from these videos in semi-real time by correcting directional control errors, caused by the cameraman's operating errors or the robotic cameras' mechanical control errors, through directional control of virtual cameras based on projective transformation. The experimental results revealed that our system was able to generate bullet time for a dynamically moving player in 3D space, or multiple shots of players within 3D space, in volleyball, gymnastics, and basketball in about a minute. System preparation, namely calibrating the cameras in advance, was finished in about five minutes. Our system was used in the live broadcast of the "ISU Grand Prix of Figure Skating 2013/2014, NHK Trophy" in November 2013: the bullet time of a dynamically moving skater on a large skating rink was generated in semi-real time and broadcast in a replay just after the competition. Thus, we confirmed that our bullet time system is practical and generic.
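Correcting a directional control error by projective transformation can be sketched as follows. This is a minimal illustration under a pinhole-camera assumption, not the paper's implementation: for a pure rotation of the (virtual) camera, the image warp is the homography H = K R K⁻¹, so a virtual camera can be "re-aimed" so that the pixel where the subject actually appears is brought to the image centre:

```python
import numpy as np

def virtual_pan_homography(K, target_px):
    """Homography K @ R @ inv(K) that virtually rotates a pinhole camera
    with intrinsics K so that pixel `target_px` maps to the principal
    point.  Illustrative sketch; names are assumptions, not the paper's API."""
    cx, cy = K[0, 2], K[1, 2]
    Kinv = np.linalg.inv(K)
    # back-project the target pixel and the principal point to unit rays
    ray_t = Kinv @ np.array([target_px[0], target_px[1], 1.0])
    ray_c = Kinv @ np.array([cx, cy, 1.0])
    ray_t /= np.linalg.norm(ray_t)
    ray_c /= np.linalg.norm(ray_c)
    # rotation taking ray_t onto ray_c (Rodrigues formula)
    v = np.cross(ray_t, ray_c)
    c = ray_t @ ray_c
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    R = np.eye(3) + vx + vx @ vx / (1.0 + c)
    return K @ R @ Kinv

# The subject was captured at (400, 300) instead of the image centre;
# the homography maps that pixel back to the principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
H = virtual_pan_homography(K, (400.0, 300.0))
p = H @ np.array([400.0, 300.0, 1.0])
p = p / p[2]
```

In practice such a homography would be applied to the whole frame (e.g. with an image-warping routine) for each camera, yielding views that all point at the same 3D target.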
We are developing an archive system that can preserve Japanese traditional dramatic arts, such as "Noh", in the form of dynamic 3D models. Dynamic 3D models are generated, frame by frame, from video images captured by multiple cameras surrounding a target object. The archive system can present an entire Noh scene from any viewpoint by synthesizing the dynamic 3D models with a computer graphics model of a Noh stage.

Dynamic 3D models are generated using the graph-cut algorithm, but it has the problem that thin parts of the object are cut off. As most actors of Japanese traditional dramatic arts wear a traditional costume with long sleeves and carry a fan, reconstructing thin parts is important for preserving the arts. We therefore introduced a constraint imposed by the silhouette edges, together with a core obtained by an adaptive erosion process on the volume intersection. We also propose a texture mapping method that blends three texture images to suppress the flicker caused by view-dependent texture mapping. We describe our archive system with the proposed methods and present the effectiveness of the methods.
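The three-texture blending idea can be sketched minimally as follows. This is an illustrative assumption about the weighting, not the paper's exact scheme: pick the three cameras whose viewing directions are closest to the virtual viewpoint and blend their textures with weights proportional to angular alignment, so the texture varies smoothly (rather than switching abruptly) as the viewpoint moves:

```python
import numpy as np

def blend_three_textures(view_dir, cam_dirs, textures):
    """Blend the textures of the three cameras most aligned with the
    virtual viewing direction, weighted by the cosine of the angle
    between directions.  Illustrative sketch; the weighting is an
    assumption, not the paper's exact formulation."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    cos = np.array([d @ view_dir / np.linalg.norm(d) for d in cam_dirs])
    top3 = np.argsort(cos)[-3:]        # indices of the three best cameras
    w = np.clip(cos[top3], 0.0, None)  # ignore cameras facing away
    w = w / w.sum()                    # normalized blend weights
    return sum(wi * textures[i] for wi, i in zip(w, top3))

# Four cameras with constant-valued dummy textures; the blended texture
# lies between the values of the three selected cameras.
cam_dirs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
            np.array([0.0, 0.0, 1.0]), np.array([1.0, 1.0, 0.0])]
textures = [np.full((2, 2), float(v)) for v in (0, 1, 2, 3)]
out = blend_three_textures(np.array([1.0, 0.2, 0.0]), cam_dirs, textures)
```

Because the weights change continuously with the viewpoint, a small camera motion produces a small texture change, which is what suppresses frame-to-frame flicker.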