RiboVision is a visualization and analysis tool for the simultaneous display of multiple layers of diverse information on primary (1D), secondary (2D), and three-dimensional (3D) structures of ribosomes. The ribosome, a macromolecular complex of ribosomal RNA (rRNA) and ribosomal proteins, is a key component of life, responsible for the synthesis of proteins in all living organisms. RiboVision is intended for rapid retrieval, analysis, filtering, and display of a variety of ribosomal data. It is preloaded with 1D, 2D, and 3D structures augmented by base-pairing, base-stacking, and other molecular interactions, as well as rRNA secondary structures, rRNA domains and helical structures, phylogeny, and crystallographic thermal factors. It also contains structures of ribosomal proteins and a database of their molecular interactions with rRNA. Preloaded structures and data cover two bacterial ribosomes (Thermus thermophilus and Escherichia coli), one archaeal ribosome (Haloarcula marismortui), and three eukaryotic ribosomes (Saccharomyces cerevisiae, Drosophila melanogaster, and Homo sapiens). RiboVision revealed several major discrepancies between the 2D and 3D structures of the rRNAs of the small and large subunits (SSU and LSU); revised structures mapped with a variety of data are available in RiboVision as well as in a public gallery (). RiboVision is designed to let users distill complex data quickly and easily generate publication-quality images of data mapped onto secondary structures. Users can readily import their own data from CSV files, analyze it in the context of other work, and map it directly onto the 1D, 2D, and 3D levels of structure. RiboVision has features in rough analogy with web-based map services, capable of seamlessly switching the type of data displayed and the resolution or magnification of the display. RiboVision is available at .
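The CSV-import workflow described above can be sketched in a few lines of Python. This is a minimal illustration, not RiboVision's actual code: the column names (`resNum`, `value`) and the color ramp are assumptions chosen for the example, not RiboVision's schema.

```python
import csv
import io

# Hypothetical per-residue CSV, in the spirit of RiboVision's user-data
# import (column names are illustrative, not RiboVision's actual schema).
raw = io.StringIO(
    "resNum,value\n"
    "1,0.12\n"
    "2,0.85\n"
    "3,0.40\n"
)

def load_residue_values(fh):
    """Read residue-number -> value pairs from a CSV stream."""
    return {int(row["resNum"]): float(row["value"]) for row in csv.DictReader(fh)}

def value_to_color(v, lo=0.0, hi=1.0):
    """Linearly map a value onto a simple blue-to-red hex color ramp."""
    t = max(0.0, min(1.0, (v - lo) / (hi - lo)))
    r, b = int(255 * t), int(255 * (1 - t))
    return f"#{r:02x}00{b:02x}"

# The resulting residue -> color table is what a viewer would paint onto
# the 1D sequence, 2D secondary-structure diagram, or 3D model.
values = load_residue_values(raw)
colors = {res: value_to_color(v) for res, v in values.items()}
print(colors[2])  # residue 2 has a high value, so a mostly red color
```

The same residue-indexed table drives all three levels of display, which is what makes a single CSV sufficient for 1D, 2D, and 3D mapping.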
Traditional geospatial information visualizations often present views that restrict the user to a single perspective. When zoomed out, local trends and anomalies become suppressed and lost; when zoomed in for local inspection, spatial awareness and comparison between regions become limited. In our model, coordinated visualizations are integrated within individual probe interfaces, which depict the local data in user-defined regions of interest. Our probe concept can be incorporated into a variety of geospatial visualizations to empower users with the ability to observe, coordinate, and compare data across multiple local regions. It is especially useful when dealing with complex simulations or analyses where behavior in various localities differs from other localities and from the system as a whole. We illustrate the effectiveness of our technique over traditional interfaces by incorporating it within three existing geospatial visualization systems: an agent-based social simulation, a census data exploration tool, and a 3D GIS environment for analyzing urban change over time. In each case, the probe-based interaction enhances spatial awareness, improves inspection and comparison capabilities, expands the range of scopes, and facilitates collaboration among multiple users.
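The core of a probe, stripped of its interface, is a user-defined region of interest with its own local summary that can be compared against other probes. A minimal sketch (the function name and rectangular region shape are assumptions for illustration, not the paper's implementation):

```python
# A "probe" reduced to its essence: filter point data to a user-defined
# rectangle and compute a local summary, so that several probes over the
# same dataset can be compared side by side.
def probe_summary(points, xmin, ymin, xmax, ymax):
    """Mean value of points falling inside the probe's rectangle,
    or None if the region is empty."""
    inside = [v for (x, y, v) in points
              if xmin <= x <= xmax and ymin <= y <= ymax]
    return sum(inside) / len(inside) if inside else None

# (x, y, value) point data; two probes over different localities can
# reveal local behavior that a single global view would average away.
data = [(1, 1, 10.0), (2, 2, 30.0), (9, 9, 100.0)]
print(probe_summary(data, 0, 0, 5, 5))    # local mean near the origin
print(probe_summary(data, 8, 8, 10, 10))  # local mean in the far corner
```

Each probe owning its own region and summary is what lets the approach scale to multiple coordinated views and multiple users.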
Mesh simplification and discrete levels of detail (LOD) are well-studied areas of research in computer graphics. However, until recently, most of the developed algorithms have focused on simplification and viewing of a single object with a large number of polygons. When these algorithms are applied to a large collection of simple models, many objects may be completely erased, leading to results that are misleading to the viewer. In this paper, we present an approach to simplifying city-sized collections of 2.5D buildings based on the principles of "urban legibility" as defined by architects and city planners. We demonstrate that our method, although similar to traditional simplification methods when compared quantitatively, better preserves the legibility and understandability of a complex urban space at all levels of simplification.
Navigation and interaction in virtual environments that use stereoscopic head-tracked displays and have very large data sets present several challenges beyond those encountered with smaller data sets and simpler displays. First, zooming by approaching or retreating from a target must be augmented by integrating scale as a seventh degree of freedom. Second, in order to maintain good stereoscopic imagery, the interface must: maintain stereo image pairs that the user perceives as a single 3D image, minimize loss of perceived depth since stereoscopic imagery cannot properly occlude the screen's frame, provide maximum depth information, and place objects at distances where they are best manipulated. Finally, the navigation interface must work when the environment is displayed at any scale. This paper addresses these problems for god's-eye-view or third-person navigation of a specific large-scale virtual environment: a high-resolution terrain database covering an entire planet.
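The idea of scale as a seventh degree of freedom (alongside three of position and three of orientation) can be sketched as follows. This is a simplified illustration under assumed conventions, not the paper's implementation: orientation is reduced to a single yaw angle, and "zooming" changes the world-to-view scale while repositioning the viewpoint so the zoom target stays fixed.

```python
from dataclasses import dataclass

# Navigation state with scale as an explicit degree of freedom.
# (Orientation is collapsed to one yaw angle for brevity; a full state
# would carry three rotational DOFs, e.g. a quaternion.)
@dataclass
class NavState:
    pos: tuple    # viewpoint position (x, y, z)
    yaw: float    # heading, radians
    scale: float  # world-to-view scale: the "seventh" DOF

def zoom_about(state, target, factor):
    """Zoom toward `target` by a factor: multiply the scale and pull the
    viewpoint toward the target so the target stays fixed in view,
    rather than simply flying closer at constant scale."""
    x, y, z = state.pos
    tx, ty, tz = target
    new_pos = (tx + (x - tx) / factor,
               ty + (y - ty) / factor,
               tz + (z - tz) / factor)
    return NavState(new_pos, state.yaw, state.scale * factor)

s = NavState((10.0, 0.0, 0.0), 0.0, 1.0)
s2 = zoom_about(s, (0.0, 0.0, 0.0), 2.0)
print(s2.pos, s2.scale)  # viewpoint halves its distance; scale doubles
```

Because scale is part of the state, the same interface logic works whether the planet fills the screen or a single valley does, which is the property the abstract's final requirement asks for.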