Fig. 1. Volume renderings of the tooth data set using transfer functions obtained with different target distributions. From left to right, the target distributions used are occurrence weighted by intensity, occurrence weighted by importance (1 for enamel and 0.5 for the rest), occurrence weighted by gradient, and occurrence weighted by importance using a mask of the nerve.

Abstract-In this paper we present a framework to define transfer functions from a target distribution provided by the user. A target distribution can reflect the data importance, a highly relevant data value interval, or a spatial segmentation. Our approach is based on a communication channel between a set of viewpoints and a set of bins of a volume data set, and it supports 1D as well as 2D transfer functions, including gradient information. The transfer functions are obtained by minimizing the informational divergence, or Kullback-Leibler distance, between the visibility distribution captured by the viewpoints and a target distribution selected by the user. The use of the derivative of the informational divergence allows for a fast optimization process. Different target distributions for 1D and 2D transfer functions are analyzed, together with importance-driven and view-based techniques.
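The divergence-minimization step can be sketched as follows. This is a simplified stand-in, not the paper's renderer: visibility is modeled here as opacity times bin occurrence, and the derivative of the divergence is approximated by finite differences; the names `kl_divergence` and `fit_opacity` are illustrative.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Informational divergence D_KL(p || q) between two discrete
    distributions (both are normalized internally)."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def fit_opacity(target, occurrence, steps=300, lr=0.2, h=1e-5):
    """Adjust per-bin opacities so the (simplified) visibility
    distribution approaches the target. Visibility is modeled as
    opacity * occurrence, a stand-in for the view-dependent visibility
    accumulated during rendering."""
    target = np.asarray(target, dtype=float)
    occurrence = np.asarray(occurrence, dtype=float)
    alpha = np.full_like(target, 0.5)  # initial opacity per bin
    for _ in range(steps):
        base = kl_divergence(alpha * occurrence, target)
        grad = np.empty_like(alpha)
        for i in range(alpha.size):
            bumped = alpha.copy(); bumped[i] += h
            grad[i] = (kl_divergence(bumped * occurrence, target) - base) / h
        alpha = np.clip(alpha - lr * grad, 1e-4, 1.0)
    return alpha
```

With an occurrence distribution skewed toward one bin and a uniform target, the fitted opacities suppress the over-represented bin and boost the rare ones, which is the qualitative behavior the optimization aims for.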
This paper introduces a volume rendering framework based on the information channel constructed between the volumetric data set and a set of viewpoints. From this channel, the information associated with each voxel can be interpreted as an ambient occlusion value that makes it possible to obtain illustrative volume visualizations. Combining the voxel information with the assignment of a color to each viewpoint and with non-photorealistic effects produces an enhanced visualization of the volume data set. Voxel information is also applied to modulate the transfer function and to select the most informative views.
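One plausible reading of the per-voxel information is sketched below, under the assumption that the channel is summarized by a viewpoint-voxel visibility matrix and that every voxel is visible from at least one viewpoint; the function name `voxel_information` is illustrative, not the paper's API.

```python
import numpy as np

def voxel_information(visibility):
    """Per-voxel information from a viewpoint-voxel channel.
    visibility[v, z] is the visibility of voxel z from viewpoint v.
    The value for voxel z is D_KL(p(V|z) || p(V)), which can be read
    as an ambient-occlusion-like shading value: voxels whose viewpoint
    distribution deviates from the average carry more information.
    Assumes each voxel is visible from some viewpoint (no zero column)."""
    vis = visibility / visibility.sum()
    p_v = vis.sum(axis=1, keepdims=True)   # marginal p(v)
    p_z = vis.sum(axis=0)                  # marginal p(z)
    p_v_given_z = vis / p_z                # columns are p(v|z)
    nz = p_v_given_z > 0
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(nz, p_v_given_z / p_v, 1.0)
    return (p_v_given_z * np.log(ratio)).sum(axis=0)  # one value per voxel
```

A voxel seen equally from all viewpoints gets zero information, while a voxel visible from only a few viewpoints gets a large value.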
Abstract-In this paper, we present a new framework for multimodal volume visualization that combines several information-theoretic strategies to define both the colors and the opacities of the multimodal transfer function. To the best of our knowledge, this is the first fully automatic scheme to visualize multimodal data. To define the fused color, we set up an information channel between two registered input data sets and then compute the informativeness associated with the respective intensity bins. This informativeness is used to weight the color contribution from the two initial 1D transfer functions. To obtain the opacity, we apply an optimization process that minimizes the informational divergence between the visibility distribution captured by a set of viewpoints and a target distribution proposed by the user. This distribution is defined from the data set features, from manually set importance values, or from both. Other problems related to multimodal visualization, such as the computation of the fused gradient and the histogram binning, have also been solved using new information-theoretic strategies. The quality and performance of our approach are evaluated on different data sets.
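The color-fusion idea can be illustrated with a small sketch. It assumes, as one plausible reading, that a bin's informativeness is its contribution to the mutual information of the channel between the two registered data sets; the helper names `bin_informativeness` and `fuse_color` are hypothetical.

```python
import numpy as np

def bin_informativeness(joint):
    """Per-bin contributions to the mutual information I(X;Y).
    joint is a 2D joint histogram of the two registered data sets.
    For each bin x of modality 1, the contribution is
    p(x) * D_KL(p(y|x) || p(y)), which is always non-negative;
    symmetrically for modality 2."""
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log(p / (px * py)), 0.0)
    return terms.sum(axis=1), terms.sum(axis=0)

def fuse_color(c1, c2, w1, w2, eps=1e-12):
    """Blend the two 1D-transfer-function colors, weighting each by the
    informativeness of the corresponding intensity bin."""
    w1, w2 = max(w1, 0.0), max(w2, 0.0)
    return (w1 * np.asarray(c1) + w2 * np.asarray(c2)) / (w1 + w2 + eps)
```

The per-bin contributions sum to the mutual information of the channel, so bins that tell us more about the other modality receive proportionally more weight in the fused color.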
Different quality metrics have been proposed in the literature to evaluate how well a visualization represents the underlying data. In this paper, we present a new information-theoretic framework that quantifies the information transfer between the source data set and the rendered image. This approach is based on the definition of an observation channel whose input and output are given by the intensity values of the volumetric data set and the pixel colors, respectively. From this channel, the mutual information, a measure of information transfer or correlation between the input and the output, is used as a metric to evaluate the visualization quality. The usefulness of the proposed observation channel is illustrated with three fundamental visualization applications: selection of informative viewpoints, transfer function design, and light positioning.
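As a sketch of how such a metric could be used, assume a joint histogram of intensity bins versus quantized pixel colors has already been accumulated for each candidate view; the helper names below are illustrative.

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) from a joint histogram. Here X would be the intensity
    bins of the volumetric data set and Y the quantized pixel colors
    of the rendered image (the observation channel's input/output)."""
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = p / (px * py)
    return float((p[nz] * np.log(ratio[nz])).sum())

def best_viewpoint(joint_histograms):
    """Rank candidate viewpoints by how much information the rendered
    image carries about the data, and return the index of the best."""
    scores = [mutual_information(j) for j in joint_histograms]
    return int(np.argmax(scores))
```

A rendering whose pixel colors are independent of the data yields zero mutual information, while a one-to-one mapping from intensity bins to colors maximizes it, which matches the intuition of "information transfer" as a quality score.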
Abstract. Exploded views are often used in illustration to overcome the problem of occlusion when depicting complex structures. In this paper, we propose a volume visualization technique inspired by exploded views that partitions the volume into a number of parallel slabs and displays them apart from each other. The thickness of the slabs is driven by the similarity between partitions. We use an information-theoretic technique for the generation of exploded views. First, the algorithm identifies the viewpoint that gives the most structured view of the data. Then, the partition of the volume into the most informative slabs for exploding is obtained using two complementary similarity-based strategies. The number of slabs and the similarity parameter are freely adjustable by the user.
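A greatly simplified, greedy stand-in for the similarity-based partition is sketched below; the paper uses two complementary strategies, so treat histogram intersection and the threshold here as assumptions for illustration only.

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity between two slice histograms (1.0 means identical)."""
    h1 = np.asarray(h1, float); h1 = h1 / h1.sum()
    h2 = np.asarray(h2, float); h2 = h2 / h2.sum()
    return float(np.minimum(h1, h2).sum())

def partition_slabs(slice_hists, threshold=0.8):
    """Greedy grouping of consecutive slices (taken perpendicular to the
    chosen viewpoint) into slabs: a new slab starts whenever the
    similarity to the previous slice drops below the threshold."""
    slabs, current = [], [0]
    for i in range(1, len(slice_hists)):
        if histogram_intersection(slice_hists[i - 1], slice_hists[i]) < threshold:
            slabs.append(current)
            current = []
        current.append(i)
    slabs.append(current)
    return slabs
```

Raising the threshold produces more, thinner slabs; lowering it merges similar regions into fewer, thicker slabs, mirroring the user-adjustable similarity parameter described above.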