Multimodal visualization aims at fusing different data sets so that the resulting combination provides more information and understanding to the user. To achieve this aim, we propose a new information-theoretic approach that automatically selects the most informative voxels from two volume data sets. Our fusion criteria are based on the information channel created between the two input data sets, which permits us to quantify the information associated with each intensity value. This specific information is obtained from three different ways of decomposing the mutual information of the channel. In addition, an assessment criterion based on the information content of the fused data set can be used to analyze and modify the initial selection of the voxels by weighting the contribution of each data set to the final result. The proposed approach has been integrated into a general framework that allows for the exploration of volumetric data models and the interactive adjustment of some parameters of the fused data set. It has been evaluated on different medical data sets with very promising results.
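As a concrete illustration of how such a channel can be built, the Python sketch below estimates the joint distribution of two registered volumes and evaluates two commonly used per-bin decompositions of the mutual information, often called surprise (I1) and predictability (I2). The function name, the bin count, and the epsilon guard are assumptions made for the example; the third decomposition mentioned in the abstract is omitted here.

```python
import numpy as np

def channel_specific_information(vol_a, vol_b, bins=64):
    # Information channel: joint probability p(a, b) estimated from
    # co-occurring voxel intensities of the two registered volumes.
    hist, _, _ = np.histogram2d(vol_a.ravel(), vol_b.ravel(), bins=bins)
    p_ab = hist / hist.sum()
    p_a = p_ab.sum(axis=1)                       # marginal p(a)
    p_b = p_ab.sum(axis=0)                       # marginal p(b)

    eps = 1e-12
    p_b_given_a = p_ab / (p_a[:, None] + eps)    # conditional p(b|a)

    # I1(a; B): divergence of p(b|a) from p(b), the "surprise" of intensity a.
    i1 = np.sum(p_b_given_a * np.log2((p_b_given_a + eps) / (p_b + eps)), axis=1)

    # I2(a; B) = H(B) - H(B|a), the "predictability" given intensity a.
    h_b = -np.sum(p_b * np.log2(p_b + eps))
    h_b_given_a = -np.sum(p_b_given_a * np.log2(p_b_given_a + eps), axis=1)
    i2 = h_b - h_b_given_a

    # The mutual information of the channel is the expectation of I1 over p(a).
    mi = float(np.sum(p_a * i1))
    return i1, i2, mi
```

Voxels whose intensity bins carry high specific information would then be the candidates selected for the fused data set.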
In this paper, we present a new framework for multimodal volume visualization that combines several information-theoretic strategies to define both the colors and the opacities of the multimodal transfer function. To the best of our knowledge, this is the first fully automatic scheme to visualize multimodal data. To define the fused color, we set up an information channel between two registered input data sets and then compute the informativeness associated with the respective intensity bins. This informativeness is used to weight the color contribution from both initial 1D transfer functions. To obtain the opacity, we apply an optimization process that minimizes the informational divergence between the visibility distribution captured by a set of viewpoints and a target distribution proposed by the user. This target distribution is defined from the data set features, from manually set importances, or from both. Other problems related to multimodal visualization, such as the computation of the fused gradient and the histogram binning, have also been solved using new information-theoretic strategies. The quality and performance of our approach are evaluated on different data sets.
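The two ingredients of the multimodal transfer function can be sketched as follows: a color-fusion step that weights the two 1D transfer-function colors by per-bin informativeness, and the informational divergence that the opacity optimization would drive down. The function names, the normalization of the weights, and the orientation of the divergence are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuse_color(a_bin, b_bin, tf_a, tf_b, info_a, info_b):
    # Weight each modality's transfer-function color by the informativeness
    # of the voxel's intensity bin (e.g. its specific information in the channel).
    w_a, w_b = info_a[a_bin], info_b[b_bin]
    total = w_a + w_b + 1e-12
    return (w_a * np.asarray(tf_a[a_bin]) + w_b * np.asarray(tf_b[b_bin])) / total

def informational_divergence(visibility, target, eps=1e-12):
    # Kullback-Leibler divergence between the user-proposed target distribution
    # and the visibility distribution captured from a set of viewpoints; the
    # opacity optimization searches for opacities that make this value small.
    v = np.asarray(visibility, dtype=float)
    t = np.asarray(target, dtype=float)
    v = v / (v.sum() + eps)
    t = t / (t.sum() + eps)
    return float(np.sum(t * np.log2((t + eps) / (v + eps))))
```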
Different quality metrics have been proposed in the literature to evaluate how well a visualization represents the underlying data. In this paper, we present a new information-theoretic framework that quantifies the information transfer between the source data set and the rendered image. This approach is based on the definition of an observation channel whose input and output are given by the intensity values of the volumetric data set and the pixel colors, respectively. From this channel, the mutual information, a measure of information transfer or correlation between the input and the output, is used as a metric to evaluate the visualization quality. The usefulness of the proposed observation channel is illustrated with three fundamental visualization applications: selection of informative viewpoints, transfer function design, and light positioning.
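A minimal sketch of the observation channel, under the assumption that each sample contributing to the image is recorded as an (intensity bin, pixel-color bin) pair with a visibility weight, could look as follows; the binning and the function signature are choices made for the example.

```python
import numpy as np

def observation_channel_mi(intensity_bins, color_bins, weights,
                           n_intensity=64, n_color=64):
    # Accumulate the joint distribution p(intensity, pixel color) from the
    # samples that contribute to the rendered image, weighted by visibility.
    joint = np.zeros((n_intensity, n_color))
    np.add.at(joint, (intensity_bins, color_bins), weights)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    eps = 1e-12
    # Mutual information I(data; image): how much information reaches the image.
    return float(np.sum(p_xy * np.log2((p_xy + eps) / (p_x * p_y + eps))))
```

Ranking candidate viewpoints, transfer functions, or light positions by this value is the common pattern behind the three applications mentioned above.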
How to extract relevant information from large data sets has become a major challenge in data visualization. Clustering techniques that classify data into groups according to similarity metrics are a suitable strategy to tackle this problem. Generally, these techniques are applied in the data space as an independent step prior to visualization. In this paper, we propose clustering in the perceptual space by maximizing the mutual information between the original data and the final visualization. For this purpose, we present a new information-theoretic framework based on rate-distortion theory that allows us to obtain maximally compressed data with minimal signal distortion. Using this framework, we propose a methodology to design a visualization process that minimizes the information loss during the clustering process. We present three application examples of the proposed methodology in different visualization techniques: scatterplots, parallel coordinates, and summary trees.
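The clustering idea can be illustrated with a simple agglomerative stand-in: starting from the joint distribution between data values and their visual representation, repeatedly merge the pair of clusters whose merge retains the most mutual information. This greedy, information-bottleneck-style sketch is only an approximation of the rate-distortion formulation described above, and all names in it are hypothetical.

```python
import numpy as np

def mutual_information(rows, eps=1e-12):
    p = np.vstack(rows); p = p / p.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    return float(np.sum(p * np.log2((p + eps) / (px * py + eps))))

def greedy_info_clustering(p_xy, n_clusters):
    # One initial cluster per data value (one row of the joint distribution
    # p(data value, visual value)); greedily merge the pair of clusters whose
    # merge preserves the most mutual information with the visual variable.
    rows = [p_xy[i].copy() for i in range(p_xy.shape[0])]
    members = [[i] for i in range(p_xy.shape[0])]
    while len(rows) > n_clusters:
        best = None
        for i in range(len(rows)):
            for j in range(i + 1, len(rows)):
                trial = rows[:i] + rows[i + 1:j] + rows[j + 1:] + [rows[i] + rows[j]]
                score = mutual_information(trial)   # MI retained after this merge
                if best is None or score > best[0]:
                    best = (score, i, j)
        _, i, j = best
        rows = rows[:i] + rows[i + 1:j] + rows[j + 1:] + [rows[i] + rows[j]]
        members = members[:i] + members[i + 1:j] + members[j + 1:] + [members[i] + members[j]]
    return members   # groups of original data values forming the clusters
```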