While the detection of interesting regions in images has been extensively studied, relatively few papers have addressed surfaces. This paper proposes an algorithm for detecting regions of interest on surfaces. It looks for regions that are distinct both locally and globally, and it accounts for the distance to the foci of attention. Many applications can utilize these regions. In this paper we explore one such application: viewpoint selection. The most informative views are those that collectively provide the most descriptive presentation of the surface. We show that our results compare favorably with the state of the art.
(a) Bremen city [3], colored according to height. (b) The best viewpoint, colored according to our saliency. Figure 1. Detecting the salient features in a point set of an urban scene. Given the noisy point set of the Bremen center (a), containing 12M points, our algorithm computes its saliency. The most salient points, such as the rosette and the crosses on the towers, are colored in yellow and red. The least salient points, belonging to the floor and the featureless walls, are colored in blue. Our saliency map is utilized for finding the most informative viewpoint (b), displaying the most interesting buildings of the city: St. Peter's Cathedral and Bremen's town hall. In (b) we also show images of the parts that were found to be the most salient.
Abstract
While saliency in images has been extensively studied in recent years, there is very little work on the saliency of point sets. This is despite the fact that point sets and range data are becoming ever more widespread and have myriad applications. In this paper we present an algorithm for detecting the salient points in unorganized 3D point sets. Our algorithm is designed to cope with extremely large sets, which may contain tens of millions of points. Such data is typical of urban scenes, which have recently become commonly available on the web. No previous work has handled such data. For general data sets, we show that our results are competitive with those of saliency detection on surfaces, although we do not have any connectivity information. We demonstrate the utility of our algorithm in two applications: producing a set of the most informative viewpoints and suggesting an informative city tour given a city scan.
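The abstract above does not spell out how per-point saliency is scored, so the following is only an illustrative sketch of the general idea of distinctness-based point-set saliency: a point is salient when its local neighborhood geometry differs from that of the points around it. The descriptor and scoring here are my own toy choices, not the paper's algorithm.

```python
import numpy as np

def point_saliency(points, k=8):
    """Toy point-set saliency: score each point by how much a simple
    local descriptor deviates from its neighbors' descriptors.
    Illustrative stand-in, not the paper's method."""
    n = len(points)
    # pairwise distances (fine for toy sets; real scans of millions of
    # points would need a k-d tree or grid instead)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]          # k nearest neighbors
    # crude local descriptor: mean distance to neighbors (density proxy)
    desc = d[np.arange(n)[:, None], nn].mean(axis=1)
    # distinctness: deviation of a point's descriptor from its neighbors'
    sal = np.abs(desc - desc[nn].mean(axis=1))
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else np.zeros(n)

# flat 5x5 grid with one raised point: the spike should be most salient
pts = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], dtype=float)
pts[12, 2] = 2.0                                    # raise the center point
sal = point_saliency(pts)
print(sal.argmax())  # -> 12, the raised center point
```

The same normalize-to-[0, 1] saliency could then drive a coloring like Figure 1's, mapping low values to blue and high values to red.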
Shape-based retrieval of 3D models has become an important challenge in computer graphics. Object similarity, however, is a subjective matter, dependent on the human viewer, since objects have semantics and are not mere geometric entities. Relevance feedback aims at addressing the subjectivity of similarity. This paper presents a novel relevance feedback algorithm that is based on supervised as well as unsupervised feature extraction techniques. It also proposes a novel signature for 3D models, the sphere projection. A Web search engine that realizes the signature and the relevance feedback algorithm is presented. We show that the proposed approach produces good results and outperforms previous techniques.
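The abstract only names the sphere-projection signature without defining it, so the sketch below is one plausible interpretation, not the paper's construction: center the model, bin vertex directions on an enclosing sphere, and keep the largest radial extent per angular bin, normalized for scale invariance.

```python
import numpy as np

def sphere_projection_signature(vertices, n_theta=8, n_phi=16):
    """Illustrative sphere-binned shape signature (my assumption of what
    a 'sphere projection' could look like, not the paper's definition)."""
    v = vertices - vertices.mean(axis=0)                  # center the model
    r = np.linalg.norm(v, axis=1)
    r_safe = np.where(r == 0, 1.0, r)
    theta = np.arccos(np.clip(v[:, 2] / r_safe, -1, 1))   # polar angle [0, pi]
    phi = np.arctan2(v[:, 1], v[:, 0]) + np.pi            # azimuth [0, 2*pi]
    ti = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    pj = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
    sig = np.zeros((n_theta, n_phi))
    np.maximum.at(sig, (ti, pj), r)                       # max radius per bin
    m = sig.max()
    return sig / m if m > 0 else sig                      # scale invariance

# a shape and its scaled copy get identical signatures
cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
                dtype=float)
s1 = sphere_projection_signature(cube)
s2 = sphere_projection_signature(3.0 * cube)
print(np.allclose(s1, s2))  # True
```

A retrieval engine would compare such fixed-size signatures with a simple vector distance, which is what makes signature-based search fast enough for the Web setting the abstract describes.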
(a) Input model (b) User's scribbles (c) Colorized model. Figure 1: Given a 3D model (a), the user scribbles on it using the desired colors (b). Our algorithm completes the colorization and generates the model shown in (c).
Abstract
This paper proposes a novel algorithm for the colorization of meshes. This is important for applications in which the model needs to be colored with just a handful of colors, or when no relevant image exists for texturing the model. For instance, archaeologists argue that the great Roman and Greek statues were full of color in the days of their creation, and traces of the original colors can still be found. In this case, our system lets the user scribble the desired colors in various regions of the mesh. Colorization is then formulated as a constrained quadratic optimization problem, which can be readily solved. Special care is taken to avoid color bleeding between regions, through the definition of a new direction field on meshes.
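The constrained quadratic formulation can be sketched concretely: minimize the sum of squared color differences across mesh edges subject to the scribbled colors, which reduces to a Laplacian linear system. The sketch below uses uniform edge weights on a toy graph; the paper's actual formulation additionally shapes the weights (via its direction field) to prevent bleeding across region boundaries.

```python
import numpy as np

def colorize(n, edges, scribbles):
    """Minimal scribble-propagation sketch: minimize
    sum over edges (c_i - c_j)^2 subject to c_i = scribbled color.
    Setting the gradient to zero yields a Laplacian linear system."""
    L = np.zeros((n, n))                      # graph Laplacian
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    A = L.copy()
    b = np.zeros(n)
    for i, c in scribbles.items():            # hard constraints at scribbles
        A[i, :] = 0.0
        A[i, i] = 1.0
        b[i] = c
    return np.linalg.solve(A, b)              # per-channel; run once per R,G,B

# path graph 0-1-2-3-4 with the two endpoints scribbled
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
c = colorize(5, edges, {0: 0.0, 4: 1.0})
print(np.round(c, 2))  # [0.   0.25 0.5  0.75 1.  ]
```

On a mesh, the vertices and edges come from the mesh connectivity, and each color channel is solved independently; with uniform weights the result interpolates smoothly between scribbles, as the path-graph example shows.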
In this paper we introduce a novel Depth-Aware Video Saliency approach to predict the human focus of attention when viewing RGBD videos on regular 2D screens. We train a generative convolutional neural network which predicts a saliency map for a frame, given the fixation map of the previous frame. Saliency estimation in this scenario is highly important since in the near future 3D video content will be easily acquired yet hard to display. This can be explained, on the one hand, by the dramatic improvement of 3D-capable acquisition equipment. On the other hand, despite the considerable progress in 3D display technologies, most 3D displays are still expensive and require wearing special glasses. To evaluate the performance of our approach, we present a new comprehensive database of eye-fixation ground truth for RGBD videos. Our experiments indicate that integrating depth into the video saliency calculation is beneficial. We demonstrate that our approach outperforms state-of-the-art methods for video saliency, achieving a 15% relative improvement.
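Building an eye-fixation ground-truth database of the kind mentioned above typically involves one standard preprocessing step (common across saliency benchmarks, not specific to this paper): discrete recorded fixation points are blurred into a continuous saliency map with an isotropic Gaussian, which then serves as the training and evaluation target.

```python
import numpy as np

def fixations_to_saliency(fixations, shape, sigma=2.0):
    """Standard ground-truth construction in saliency benchmarks:
    blur discrete fixation points into a continuous map with a
    Gaussian, then normalize to [0, 1]."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    sal = np.zeros(shape)
    for fy, fx in fixations:                  # one Gaussian bump per fixation
        sal += np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2) / (2 * sigma ** 2))
    m = sal.max()
    return sal / m if m > 0 else sal

# two fixations on a 16x16 frame; the map peaks at the fixation points
sal = fixations_to_saliency([(4, 4), (10, 12)], (16, 16))
print(np.unravel_index(sal.argmax(), sal.shape))  # (4, 4)
```

Per-frame maps built this way would be the supervision targets for a predictive network like the one the abstract describes; the choice of `sigma` (here an arbitrary 2.0 pixels) is usually tied to the visual angle of the fovea in the viewing setup.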