We present an image-based technique to accelerate navigation in complex static environments. We perform an image-space simplification of each sample of the scene taken at a particular viewpoint and dynamically combine these simplified samples to produce images for arbitrary viewpoints. Since the scene is converted into a bounded-complexity representation in image space, with the base images rendered beforehand, the rendering speed is relatively insensitive to the complexity of the scene. The proposed method correctly simulates the kinetic depth effect, parallax, and occlusion, and can resolve missing visibility information. This paper describes a suitable representation for the samples, a specific technique for simplifying them, and different morphing methods for combining the sample information to reconstruct the scene. We use hardware texture mapping to implement the image-space warping and hardware affine transformations to compute the viewpoint-dependent warping function.
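To make the idea of a viewpoint-dependent warp concrete, the following is a minimal software sketch of forward-warping a depth-augmented reference image to a new viewpoint. It is an illustration only, not the paper's hardware texture-mapping implementation; the function name, the pinhole camera model, and the nearest-neighbor splatting scheme are assumptions.

```python
import numpy as np

def forward_warp(image, depth, K, R, t):
    """Forward-warp a grayscale reference image with per-pixel depth to a
    new viewpoint, given intrinsics K and relative pose (R, t)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous pixel coords
    rays = np.linalg.inv(K) @ pix          # back-project through the reference camera
    pts = rays * depth.ravel()             # 3-D points in the reference frame
    pts_new = R @ pts + t[:, None]         # transform into the new camera frame
    proj = K @ pts_new
    uv = proj[:2] / proj[2]                # perspective divide
    u = np.round(uv[0]).astype(int)
    v = np.round(uv[1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (proj[2] > 0)
    out = np.zeros_like(image)
    # naive nearest-neighbor splat; occlusion is resolved by splatting
    # far-to-near so nearer surfaces overwrite farther ones
    order = np.argsort(-proj[2][valid])
    out[v[valid][order], u[valid][order]] = image.ravel()[valid][order]
    return out
```

With the identity pose (R = I, t = 0) the warp reproduces the reference image; a hardware implementation instead expresses this reprojection as a texture-mapped transformation, which is what makes the rendering cost independent of scene complexity.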
Reconciling scene realism with interactivity has emerged as one of the most important problems in making virtual reality feasible for large-scale mechanical CAD datasets consisting of several million primitives. This paper surveys our research and related work on achieving interactivity without sacrificing realism in virtual reality walkthroughs and flythroughs of polygonal CAD datasets. We outline our recent work on efficient generation of triangle strips from polygonal models, which takes advantage of compression of connectivity information and yields substantial savings in rendering, transmission, and storage. We outline our work on genus-reducing simplifications as well as real-time view-dependent simplifications that allow on-the-fly selection among multiple levels of detail, based on lighting and viewing parameters. Our method allows multiple levels of detail to coexist on the same object at different regions and to merge seamlessly without any cracks or shading artifacts. We also present an overview of our work on hardware-assisted image-based rendering that allows interactive exploration of computer-generated scenes.
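The connectivity savings from triangle strips are easy to see on the decoding side: a strip of n + 2 vertex indices encodes n triangles, versus 3n indices for independent triangles. Below is a minimal decoder sketch; the function name and the degenerate-index convention for stitching strips are assumptions for illustration, not the paper's encoding.

```python
def strip_to_triangles(strip):
    """Decode a triangle strip: n + 2 vertex indices yield n triangles.
    Winding alternates at each step, so odd-indexed triangles are
    reordered to keep a consistent orientation."""
    tris = []
    for i in range(len(strip) - 2):
        a, b, c = strip[i], strip[i + 1], strip[i + 2]
        if a == b or b == c or a == c:
            continue  # degenerate triangle, commonly used to stitch strips
        tris.append((a, b, c) if i % 2 == 0 else (b, a, c))
    return tris
```

For example, the 5-index strip [0, 1, 2, 3, 4] decodes to three triangles, where independent triangles would need nine indices; this ratio approaches 3x for long strips, which is the source of the rendering and transmission savings.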