Abstract: 3D video is a real 3D movie that records an object's full 3D shape, motion, and precise surface texture. This paper first proposes a parallel pipeline processing method for reconstructing a dynamic 3D object shape from multi-view video images, by which a temporal series of full 3D voxel representations of the object's behavior can be obtained in real time. To realize real-time processing, we first introduce a plane-based volume intersection algorithm: represent the observable 3D space by a group of parallel plane slices, back-project the observed multi-view object silhouettes onto each slice, and apply 2D silhouette intersection on each slice. We then propose a method to parallelize this algorithm on a PC cluster, in which each PC runs five-stage pipeline processing and the silhouette intersection is parallelized slice by slice. Quantitative performance evaluation results are given to demonstrate the effectiveness of the proposed methods. In the latter half of the paper, we present an algorithm for generating video texture on the reconstructed dynamic 3D object surface. We first describe a naive view-independent rendering method and show its problems. We then improve the method by introducing image-based rendering techniques. Experimental results demonstrate the effectiveness of the improved method in generating high-fidelity object images from arbitrary viewpoints.
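The plane-based volume intersection idea summarized above can be sketched in a few lines. The sketch below is a toy illustration, not the paper's implementation: the function names, the set-based slice representation, and the simplified projection callbacks are all assumptions introduced here. Each slice is processed independently, which is what makes the slice-by-slice parallelization on a PC cluster straightforward.

```python
# Toy sketch of plane-based volume intersection (illustrative names only):
# slice the observable space into parallel planes, back-project each camera's
# object silhouette onto every slice, and intersect the resulting 2D masks
# to obtain the visual-hull cross section on that slice.

def backproject_silhouette(silhouette, project, slice_z, grid):
    """Mark each slice cell whose 3D point projects inside one silhouette.

    silhouette: set of (u, v) image pixels inside the object silhouette
    project:    camera projection (x, y, z) -> (u, v)  (toy model)
    """
    mask = set()
    for (x, y) in grid:
        u, v = project(x, y, slice_z)
        if (u, v) in silhouette:
            mask.add((x, y))
    return mask

def slice_intersection(views, slice_z, grid):
    """2D intersection of all back-projected silhouettes on one slice."""
    masks = [backproject_silhouette(sil, proj, slice_z, grid)
             for sil, proj in views]
    cross_section = masks[0]
    for m in masks[1:]:
        cross_section &= m  # logical AND across camera views
    return cross_section

def reconstruct_voxels(views, slice_zs, grid):
    """Stack per-slice cross sections into a full voxel representation.

    Slices are mutually independent, so this loop is the natural unit
    of slice-by-slice parallelization."""
    return {z: slice_intersection(views, z, grid) for z in slice_zs}
```

A minimal usage example with two orthographic "cameras" (one looking along the y axis, one along the x axis) shows a single voxel being carved out at the intersection of the two back-projected silhouettes:

```python
grid = [(x, y) for x in range(3) for y in range(3)]
sil_a = {(1, 0)}                      # camera A sees pixel (x, z) = (1, 0)
sil_b = {(1, 0)}                      # camera B sees pixel (y, z) = (1, 0)
proj_a = lambda x, y, z: (x, z)       # orthographic view along y
proj_b = lambda x, y, z: (y, z)       # orthographic view along x
voxels = reconstruct_voxels([(sil_a, proj_a), (sil_b, proj_b)], [0], grid)
```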