Viewpoint selection is an emerging area in computer graphics with applications in fields such as scene exploration, image-based modeling, and volume visualization. In particular, best view selection algorithms are used to obtain the minimum number of views (or images) needed to understand or model an object or scene. In this article, we present a unified framework for viewpoint selection and mesh saliency based on the definition of an information channel between a set of viewpoints (input) and the set of polygons of an object (output). The mutual information of this channel is shown to be a powerful tool for dealing with viewpoint selection, viewpoint stability, object exploration, and viewpoint-based saliency. In addition, viewpoint mutual information is extended using saliency as an importance factor, showing how perceptual criteria can be incorporated into our method. Although we use a sphere of viewpoints around an object, our framework is also valid for any set of viewpoints in a closed scene. A number of experiments demonstrate the robustness of our approach and the good behavior of the proposed measures. ACM Reference Format: Feixas, M., Sbert, M., and González, F. 2009. A unified information-theoretic framework for viewpoint selection and mesh saliency.
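The viewpoint-to-polygon channel described above can be sketched numerically. Assuming a matrix of projected polygon areas per viewpoint (a hypothetical `proj_areas` input that a renderer would supply), the per-viewpoint mutual information of the channel V → Z follows from the conditional distribution p(z|v) and the marginal p(z). This is a minimal sketch of the measure, not the authors' implementation:

```python
import numpy as np

def viewpoint_mutual_information(proj_areas, view_probs=None):
    """Per-viewpoint mutual information of the channel V -> Z.

    proj_areas: (n_views, n_polys) matrix of projected polygon areas
    (hypothetical input; a renderer would supply these).
    Returns I(v; Z) for each viewpoint v."""
    A = np.asarray(proj_areas, dtype=float)
    p_z_given_v = A / A.sum(axis=1, keepdims=True)       # conditional p(z|v)
    p_v = (np.full(A.shape[0], 1.0 / A.shape[0])
           if view_probs is None else np.asarray(view_probs, dtype=float))
    p_z = p_v @ p_z_given_v                              # marginal p(z)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(p_z_given_v > 0, p_z_given_v / p_z, 1.0)
    return (p_z_given_v * np.log2(ratio)).sum(axis=1)    # I(v; Z) per view
```

In this framework a low mutual information indicates a viewpoint whose view of the polygons is close to the average, i.e., a representative view, so a best-view selector would take the argmin over the returned vector.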
Abstract: This paper introduces a concept for automatic focusing on features within a volumetric data set. The user selects a focus, i.e., an object of interest, from a set of predefined features. Our system automatically determines the most expressive view of this feature. A characteristic viewpoint is estimated by a novel information-theoretic framework based on the mutual information measure. Viewpoints change smoothly when the focus is switched from one feature to another. This mechanism is controlled by changes in the importance distribution among features in the volume: the highest importance is assigned to the feature in focus. Apart from viewpoint selection, the focusing mechanism also steers visual emphasis by assigning a visually more prominent representation to the feature in focus. To allow a clear view of features that are normally occluded by other parts of the volume, the focusing mechanism incorporates, for example, cut-away views.
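A minimal sketch of the focus-driven selection idea, under the assumption that a per-feature view "expressiveness" score (e.g., from a mutual-information measure) is already available; the function names and the score matrix are hypothetical, not the paper's API:

```python
import numpy as np

def focus_viewpoint(view_scores, importance):
    """Pick the characteristic viewpoint for the current focus.

    view_scores: (n_views, n_features) hypothetical per-feature view
    expressiveness scores. importance: distribution over features;
    the feature in focus carries the highest importance."""
    w = np.asarray(importance, dtype=float)
    w = w / w.sum()
    weighted = np.asarray(view_scores, dtype=float) @ w  # blend by importance
    return int(np.argmax(weighted))

def shift_focus(imp_from, imp_to, t):
    """Linear blend of importance distributions; sweeping t from 0 to 1
    moves the selected viewpoint smoothly from one focus to the next."""
    return (1.0 - t) * np.asarray(imp_from) + t * np.asarray(imp_to)
```

Re-evaluating `focus_viewpoint` along the blended importance path is one simple way to realize the smooth viewpoint change the paper describes.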
In 1928, George D. Birkhoff formalized the aesthetic measure of an object as the quotient between order and complexity (see also the "Related Work" sidebar) [1]. From Birkhoff's work, Max Bense [2], together with Abraham Moles [3], developed informational aesthetics (or information-theoretic aesthetics, from the original German term), which defines the concepts of order and complexity from Shannon's notion of information [4]. As Birkhoff stated, formalizing these concepts, which depend on the context, author, observer, and so on, is difficult. Scha and Bod claimed that in spite of these measures' simplicity, "if we integrate them with other ideas from perceptual psychology and computational linguistics, they may in fact constitute a starting point for the development of more adequate formal models" [5]. The creative process generally produces order from disorder. Bense proposed a general schema that characterizes artistic production by the transition from the repertoire to the final product: he assigned a complexity to the repertoire, or palette, and an order to the distribution of its elements on the artistic product. This article, an extended and revised version of earlier work [6], presents a set of measures that conceptualizes Birkhoff's aesthetic measure from an informational viewpoint. These measures describe complementary aspects of the aesthetic experience and are normalized for comparison. We show the measures' behavior using three sets of paintings representing different styles that cover a representative feature range: from randomness to order. Our experiments show that both global and compositional measures extend Birkhoff's measure and help us understand and quantify the creative process. Information theory and Kolmogorov complexity. Some basic notions of information theory [4], Kolmogorov complexity [7], and physical entropy [8] serve as background for our work.
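Birkhoff's quotient M = O/C admits a simple informational reading: taking the Shannon entropy of an image's palette histogram as its complexity, one normalized "order" term is the redundancy H_max − H. The sketch below is illustrative of this style of measure, not one of the article's actual measures:

```python
import numpy as np

def shannon_entropy(values, bins=256):
    """Shannon entropy (bits) of the value histogram, taken here as the
    palette complexity; assumes integer values in [0, bins)."""
    hist, _ = np.histogram(values, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def birkhoff_redundancy(values, bins=256):
    """Informational reading of Birkhoff's M = order/complexity:
    order as the entropy reduction H_max - H, normalized by H_max."""
    h_max = np.log2(bins)
    return (h_max - shannon_entropy(values, bins)) / h_max
```

A uniform palette (maximal randomness) scores near 0, while a constant image (maximal order) scores 1, matching the randomness-to-order range the experiments cover.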
In the last decade a new family of methods, namely Image-Based Rendering, has appeared. These techniques rely on precomputed images to totally or partially substitute for the geometric representation of the scene, which allows realistic renderings to be obtained even with modest resources. The main problem is the amount of data needed, mainly due to high redundancy and the high computational cost of capture. In this paper we present a new method to automatically determine camera placements that yield a minimal set of views for Image-Based Rendering. The input is a 3D polyhedral model, including textures, and the output is a set of views that samples all visible polygons at an appropriate rate. Because the viewpoints cover all visible polygons with adequate quality, the excessive data redundancy present in several other approaches is avoided, and the cost of the capture process drops as fewer reference views need to be computed. Interesting viewpoints are located with the aid of an information theory-based measure, dubbed viewpoint entropy, which quantifies the amount of information seen from a viewpoint. We then develop a greedy algorithm to minimize the number of images needed to represent a scene. In contrast to other approaches, our system uses a special preprocess for textures to avoid artifacts in partially occluded textured polygons, so no visible detail of these images is lost.
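The two ingredients named above, viewpoint entropy and the greedy minimization, can be sketched as follows. The sampling-rate criterion is simplified here to a per-view set of adequately sampled polygons (a hypothetical stand-in for the paper's actual quality test):

```python
import math

def viewpoint_entropy(proj_areas):
    """Viewpoint entropy of one view: Shannon entropy of the
    projected-area distribution of the visible faces."""
    total = sum(proj_areas)
    return -sum(a / total * math.log2(a / total)
                for a in proj_areas if a > 0)

def greedy_view_set(visible, n_polys):
    """Greedy cover: repeatedly take the view that contributes the most
    still-uncovered polygons. `visible[v]` is the set of polygon ids
    adequately sampled from view v (simplified stand-in for the
    sampling-rate criterion)."""
    uncovered = set(range(n_polys))
    chosen = []
    while uncovered:
        best = max(visible, key=lambda v: len(visible[v] & uncovered))
        gain = visible[best] & uncovered
        if not gain:
            break  # remaining polygons are invisible from every view
        chosen.append(best)
        uncovered -= gain
    return chosen
```

Set cover is NP-hard, so a greedy selection of this kind is the standard practical choice; it guarantees a solution within a logarithmic factor of the optimal number of views.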