In this paper, a novel technique for saliency detection called Global Information Divergence is proposed. The technique is based on the divergence in information between two regions. First, patches are extracted at multiple scales from the input image. The dimensionality of the extracted patches is then reduced using Principal Component Analysis (PCA). Finally, the information divergence is computed between the reduced-dimensionality patches of a center region and those of a surround region. Our technique defines the center patch and the surround patches globally, considering them collectively over the whole image. The technique is evaluated on four challenging datasets for both saliency detection and segmentation. The results show strong performance in both saliency-map quality and speed compared with 16 state-of-the-art techniques.
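The pipeline above (multi-scale patch extraction, PCA reduction, then a center/surround divergence) can be sketched in Python. This is a minimal illustration rather than the paper's implementation: the function names are hypothetical, only a single scale is shown, and a squared distance in PCA space stands in for the actual information divergence measure.

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Slide a window over a grayscale image and collect flattened patches."""
    H, W = image.shape
    patches = []
    for y in range(0, H - patch_size + 1, stride):
        for x in range(0, W - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size].ravel())
    return np.array(patches)

def pca_reduce(patches, n_components):
    """Project patches onto their top principal components."""
    centered = patches - patches.mean(axis=0)
    # SVD of the centered data matrix gives the principal axes in Vt.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T

def divergence_saliency(coeffs):
    """Score each patch (the 'center') against the mean of all remaining
    patches (the global 'surround'), using squared distance as a simple
    stand-in for an information divergence."""
    scores = np.empty(len(coeffs))
    for i, c in enumerate(coeffs):
        surround = np.delete(coeffs, i, axis=0).mean(axis=0)
        scores[i] = np.sum((c - surround) ** 2)
    return scores
```

In a multi-scale setting, the same three steps would be repeated for several patch sizes and the resulting per-patch scores accumulated into one saliency map.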
Understanding the visual quality of a feature map plays a significant role in many active vision applications. Previous works mostly rely on object-level features, such as compactness, to estimate the quality score of a feature map. However, compactness is computed on feature maps produced by salient object detection techniques, where the maps tend to be compact. As a result, the compactness feature fails when the feature maps are blurry (e.g., fixation maps). In this paper, we treat the estimation of the quality score of feature maps, specifically fixation maps, as a regression problem. After extracting several local, global, geometric, and positional characteristic features from a feature map, a model is learned using a random forest regressor to estimate the quality score of any unseen feature map. Our model is specifically tailored to estimate the quality of three types of maps: bottom-up, target, and contextual feature maps. These maps are produced for a large benchmark fixation dataset of more than 900 challenging outdoor images. We demonstrate that our approach provides an accurate estimate of the quality of the aforementioned feature maps compared to the ground-truth data. In addition, we show that the proposed approach is useful in feature map integration for predicting human fixations. Instead of naively integrating all three feature maps when predicting human fixations, our approach dynamically selects the best feature map, i.e., the one with the highest estimated quality score, on a per-image basis, thereby improving fixation prediction accuracy.
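As an illustration of the characteristic-feature step, the sketch below computes a handful of global, entropy-based, and positional descriptors from a 2-D map. These particular descriptors are assumptions standing in for the paper's actual feature set; in practice, vectors like these would feed a random forest regressor (e.g., scikit-learn's RandomForestRegressor) trained against ground-truth quality scores.

```python
import numpy as np

def map_characteristics(fmap):
    """Compute a small set of descriptors from a 2-D feature map with
    values assumed in [0, 1]: global statistics, a histogram-entropy
    term, and the normalized center of mass of the map."""
    fmap = np.asarray(fmap, dtype=float)
    flat = fmap.ravel()
    # Global statistics.
    feats = [float(flat.mean()), float(flat.std()), float(flat.max())]
    # Histogram entropy (in nats): a blurry, spread-out map scores high,
    # a compact, peaked map scores low.
    hist, _ = np.histogram(flat, bins=16, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    feats.append(float(-(p * np.log(p)).sum()))
    # Normalized center of mass (positional feature).
    H, W = fmap.shape
    total = flat.sum() + 1e-12
    ys, xs = np.mgrid[0:H, 0:W]
    feats.append(float((ys * fmap).sum() / total / H))
    feats.append(float((xs * fmap).sum() / total / W))
    return np.array(feats)
```

The entropy term captures the compactness-versus-blurriness distinction discussed above, while the center-of-mass coordinates expose the positional bias of a map to the regressor.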
<p>The human visual attention (HVA) system encompasses a set of interconnected neurological modules responsible for analyzing visual stimuli by attending to salient regions. Two contrasting biological mechanisms exist in the HVA system: bottom-up, data-driven attention and top-down, task-driven attention. The former is mostly responsible for low-level instinctive behaviors, while the latter is responsible for performing complex visual tasks such as target object detection. Very few computational models have been proposed to model top-down attention, mainly for three reasons. First, the top-down process involves many influential factors. Second, top-down responses vary considerably from task to task. Finally, many biological aspects of the top-down process are not yet well understood. For these reasons, it is difficult to devise a generalized top-down model applicable to all high-level visual tasks. Instead, this thesis addresses some outstanding issues in modelling top-down attention for one particular task: target object detection. Target object detection is an essential step in analyzing images to further perform complex visual tasks. It has not been investigated thoroughly when modelling top-down saliency and hence constitutes the main application domain for this thesis. The thesis investigates methods to model top-down attention through various kinds of high-level data acquired from images. Furthermore, it investigates different strategies to dynamically combine bottom-up and top-down processes to improve detection accuracy, as well as the computational efficiency of existing and new visual attention models. The following techniques and approaches are proposed to address the outstanding issues in modelling top-down saliency: 1. 
A top-down saliency model that weights low-level attentional features through contextual knowledge of a scene. The proposed model assigns weights to the features of a novel image by extracting a contextual descriptor of the image. This descriptor tunes the weighting of low-level features to maximize detection accuracy; incorporating context into the weighting mechanism improves the quality of the weights assigned to these features. 2. Two modules of target features combined with contextual weighting to improve detection accuracy for the target object. In this model, two sets of attentional feature weights are learned, one through context and the other through target features. When both sources of knowledge are used to model top-down attention, a drastic increase in detection accuracy is achieved on images with complex backgrounds and a variety of target objects. 3. A model for combining top-down and bottom-up attention based on feature interaction. This model combines both processes dynamically by formulating the problem as feature selection. The selection exploits the interaction between features, yielding a robust feature set that maximizes both detection accuracy and the overall efficiency of the system. 4. A feature map quality estimation model that accurately predicts the detection accuracy score of any novel feature map without the need for ground-truth data. The model extracts various local, global, geometrical, and statistical characteristic features from a feature map; these characteristics guide a regression model to estimate the quality of a novel map. 5. A dynamic feature integration framework for combining bottom-up and top-down saliencies at runtime. If the estimation model can accurately predict the quality score of any novel feature map, then dynamic feature map integration can be performed based on the estimated value. 
We propose two frameworks for feature map integration using the estimation model. The proposed integration framework achieves higher human fixation prediction accuracy with a minimal number of feature maps than is achieved by combining all feature maps. The work proposed in this thesis provides new directions in modelling top-down saliency for target object detection. In addition, the dynamic approaches to combining top-down and bottom-up processes show considerable improvements over existing approaches in both efficiency and accuracy.</p>
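The runtime integration idea in items 4 and 5 can be sketched as follows. The sketch assumes a hypothetical `estimate_quality` callable standing in for the learned estimation model; given it, dynamic integration reduces to selecting the highest-scoring map per image, in contrast to the naive baseline of averaging all maps.

```python
import numpy as np

def integrate_dynamic(maps, estimate_quality):
    """Per image, pick the single feature map with the highest estimated
    quality score. `estimate_quality` is a hypothetical callable mapping
    a 2-D map to a scalar, standing in for the learned regressor."""
    scores = np.array([estimate_quality(m) for m in maps])
    return maps[int(np.argmax(scores))]

def integrate_naive(maps):
    """Baseline: average all feature maps regardless of their quality."""
    return np.mean(np.stack(maps), axis=0)
```

Because only the selected map is used downstream, the dynamic scheme also reduces the number of feature maps that must be fused at runtime, which is the efficiency gain the thesis refers to.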