Abstract: We present hierarchical occlusion maps (HOM) for visibility culling on complex models with high depth complexity. The culling algorithm uses an object space bounding volume hierarchy and a hierarchy of image space occlusion maps. Occlusion maps represent the aggregate of projections of the occluders onto the image plane. For each frame, the algorithm selects a small set of objects from the model as occluders and renders them to form an initial occlusion map, from which a hierarchy of occlusion maps is built. The occlusion maps are used to cull away a portion of the model not visible from the current viewpoint. The algorithm is applicable to all models and makes no assumptions about the size, shape, or type of occluders. It supports approximate culling in which small holes in or among occluders can be ignored. The algorithm has been implemented on current graphics systems and has been applied to large models composed of hundreds of thousands of polygons. In practice, it achieves significant speedup in interactive walkthroughs of models with high depth complexity.
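As a minimal sketch of the pyramid construction and the conservative overlap test described above (the function names are hypothetical, not from the paper, and a square, power-of-two opacity map rendered from the selected occluders is assumed):

```python
def build_pyramid(base):
    """Build a hierarchy of occlusion maps by repeated 2x2 averaging
    of an opacity map (values in [0, 1])."""
    pyramid = [base]
    while len(pyramid[-1]) > 1:
        prev = pyramid[-1]
        half = len(prev) // 2
        pyramid.append([
            [(prev[2 * i][2 * j] + prev[2 * i][2 * j + 1] +
              prev[2 * i + 1][2 * j] + prev[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(half)]
            for i in range(half)
        ])
    return pyramid

def region_occluded(level, i0, i1, j0, j1, threshold=1.0):
    """Conservative test: the screen-space region [i0,i1) x [j0,j1) is
    considered occluded only if every covering cell is opaque enough.
    A threshold below 1.0 gives the approximate culling mentioned above,
    ignoring small holes in or among the occluders."""
    return all(level[i][j] >= threshold
               for i in range(i0, i1)
               for j in range(j0, j1))
```

In the full algorithm the test starts at a coarse level of the pyramid and descends only where a cell's averaged opacity is inconclusive; the threshold corresponds to the paper's approximate culling.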
We investigate the response of multi-walled carbon nanotubes to mechanical strain applied with an Atomic Force Microscope (AFM) probe. We find that in some samples, changes in the contact resistance dominate the measured resistance change. In others, strain large enough to fracture the tube can be applied without a significant change in resistance until the tube fails. We have also manipulated the ends of the broken tube back into contact with each other, re-establishing a finite resistance. We observe that in this broken configuration the resistance of the sample is tunable to values 15-350 kΩ greater than prior to breaking.
Abstract: Many applications in computer graphics and virtual environments need to render datasets with large numbers of primitives and high depth complexity at interactive rates. However, standard techniques like view-frustum culling and a hardware z-buffer are unable to display datasets composed of hundreds of thousands of polygons at interactive frame rates on current high-end graphics systems. We add a "conservative" visibility culling stage to the rendering pipeline, attempting to identify and avoid processing of occluded polygons. Given a moving viewpoint, the algorithm dynamically chooses a set of occluders. Each occluder is used to compute a shadow frustum, and all primitives contained within this frustum are culled. The algorithm hierarchically traverses the model, culling out parts not visible from the current viewpoint using efficient, robust, and in some cases specialized interference detection algorithms. The algorithm's performance varies with the location of the viewpoint and the depth complexity of the model. In the worst case it is linear in the input size with a small constant. In this paper, we demonstrate its performance on a city model composed of 500,000 polygons and possessing varying depth complexity. We are able to cull an average of 55% of the polygons that would not be culled by view-frustum culling and obtain a commensurate improvement in frame rate. The overall approach is effective and scalable, is applicable to all polygonal models, and can be easily implemented on top of view-frustum culling.
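A 2-D sketch of the shadow-frustum containment test may help; the helper names below are hypothetical, and the paper works in 3-D with frusta rather than the planar wedges used here:

```python
def cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_shadow(eye, a, b, p):
    """True if point p lies in the 2-D shadow wedge cast by occluder
    segment (a, b) as seen from eye: p is on the far side of the segment
    from eye and inside the wedge bounded by the rays eye->a and eye->b."""
    far_side = cross(a, b, p) * cross(a, b, eye) < 0
    in_wedge = (cross(eye, a, p) * cross(eye, a, b) > 0 and
                cross(eye, b, p) * cross(eye, b, a) > 0)
    return far_side and in_wedge

def box_in_shadow(eye, a, b, corners):
    """Conservative cull: a bounding box is culled only if every corner
    lies inside the shadow wedge."""
    return all(in_shadow(eye, a, b, c) for c in corners)
```

In the actual algorithm this containment query is applied hierarchically to the nodes of the model's bounding volume hierarchy, so whole subtrees fully inside a shadow frustum are culled at once.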
The Virtual-Reality Peripheral Network (VRPN) system provides a device-independent and network-transparent interface to virtual-reality peripherals. VRPN's application of factoring by function and of layering in the context of devices produces an interface that is novel and powerful. VRPN also integrates a wide range of known advanced techniques into a publicly-available system. These techniques benefit both direct VRPN users and those who implement other applications that make use of VR peripherals.
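VRPN itself is a C++ library; the Python sketch below only illustrates the two ideas named above, factoring by function (one object per device function, regardless of hardware) and network-transparent naming ("device@host"). It is not the VRPN API, and all names in it are hypothetical:

```python
class Tracker:
    """One device function ('report poses'); any tracking hardware can
    back this interface, so clients stay device-independent."""
    def __init__(self):
        self._handlers = []

    def register_change_handler(self, fn):
        """Clients subscribe to reports instead of polling hardware."""
        self._handlers.append(fn)

    def deliver(self, report):
        """Fan a new report out to every registered handler."""
        for fn in self._handlers:
            fn(report)

def open_device(name):
    """Parse a network-transparent 'device@host' name. A real system
    would connect to a server on 'host'; this stand-in is local only."""
    device, _, host = name.partition("@")
    return Tracker()  # hypothetical lookup by (device, host)

tkr = open_device("Tracker0@localhost")
poses = []
tkr.register_change_handler(poses.append)
tkr.deliver({"sensor": 0, "pos": (0.0, 0.0, 1.5)})
```

The same physical device (say, a tracked wand with buttons) would expose several such objects, one per function, which is the factoring the abstract refers to.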