Visual Saliency aims to detect the most important regions of an image from a perceptual point of view. More specifically, the goal of Visual Saliency is to build a Saliency Map revealing the salient subset of a given image by analyzing bottom-up and top-down factors of Visual Attention. In this paper we propose a new method for saliency detection based on colour and scale analysis, extending our previous work based on the inspection of SIFT spatial density. We conducted several experiments to study the relationships between saliency methods and object-attention processes, collecting experimental data by tracking the eye movements of thirty viewers during the first three seconds of observation of several images. More precisely, we used a dataset consisting of images with an object in the foreground on a homogeneous background. We are interested in the performance of our saliency method with respect to the real fixation maps collected during the experiments. We compared our method with several state-of-the-art methods, with very encouraging results.
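The SIFT-spatial-density idea mentioned above can be sketched in a few lines: a saliency map is built as a kernel-density estimate over interest-point coordinates, so clusters of keypoints become bright regions. This is a minimal illustration under our own assumptions, not the paper's implementation; the function name `density_saliency`, the Gaussian bandwidth, and the toy keypoints are all illustrative.

```python
import numpy as np

def density_saliency(points, shape, sigma=15.0):
    """Build a saliency map from interest-point coordinates by
    kernel density estimation: each point contributes a Gaussian
    bump, so dense clusters of points yield high saliency."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    sal = np.zeros(shape, dtype=float)
    for (px, py) in points:
        sal += np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
    sal -= sal.min()
    if sal.max() > 0:
        sal /= sal.max()  # normalise to [0, 1]
    return sal

# Toy example: a tight cluster of keypoints near (30, 30) dominates
# the map, while the isolated point at (90, 90) contributes less.
pts = [(30, 30), (32, 28), (29, 33), (90, 90)]
sal = density_saliency(pts, (120, 120))
peak = np.unravel_index(np.argmax(sal), sal.shape)
```

In a real pipeline the keypoints would come from a SIFT detector run on the image rather than a hand-written list.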
In the first seconds of observation of an image, several visual attention processes are involved in identifying the visual targets that pop out from the scene. Saliency is the quality that makes certain regions of an image stand out from the visual field and grab our attention. Saliency detection models, inspired by visual cortex mechanisms, employ both colour and luminance features. Furthermore, both the locations of pixels and the presence of objects influence visual attention processes. In this paper, we propose a new saliency method that combines the distribution of interest points in the image with multiscale analysis, a centre-bias module, and a machine learning approach. We use perceptually uniform colour spaces to study how colour affects the extraction of saliency. To investigate eye movements and assess the performance of saliency methods on object-based images, we conducted experimental sessions on our dataset ETTO (Eye Tracking Through Objects). Experiments show our approach to be accurate with respect to state-of-the-art methods and publicly accessible eye-movement datasets. Performance on object-based images is excellent and remains consistent on generic pictures. Our work also reveals interesting relationships between saliency and perceptually uniform colour spaces.
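Assessing a saliency method against eye-tracking data, as described above, is commonly done with the linear correlation coefficient (CC) between the predicted saliency map and the human fixation map. The sketch below is our own minimal version of that metric (the function name and the numerical-stability constant are ours; the papers may also use other standard metrics such as AUC or NSS).

```python
import numpy as np

def cc_score(saliency, fixation_map):
    """Linear correlation coefficient (CC) between a predicted
    saliency map and a human fixation map: both maps are
    standardised to zero mean and unit variance, then correlated.
    Returns a value in [-1, 1]; higher means better agreement."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
    f = (fixation_map - fixation_map.mean()) / (fixation_map.std() + 1e-12)
    return float((s * f).mean())

# Toy check on a random "fixation map": a map correlates perfectly
# with itself and anti-correlates with its inverse.
rng = np.random.default_rng(0)
fix = rng.random((64, 64))
self_cc = cc_score(fix, fix)
anti_cc = cc_score(fix, 1.0 - fix)
```

In practice the fixation map is built by accumulating recorded gaze positions and blurring them with a Gaussian before comparison.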
Color vision deficiencies affect the visual perception of colors and, more generally, of color images. Several disciplines, such as genetics, biology, medicine, and computer vision, are involved in studying and analyzing vision deficiencies. As visual saliency research has shown, the human visual system tends to fixate specific points and regions of an image in the first seconds of observation, summing up the most important and meaningful parts of the scene. In this article, we study the behavioral differences between normal and color-deficient human visual systems. We eye-tracked human fixations during the first three seconds of observation of color images to build real fixation maps. One of our contributions is to identify the main differences between these visual systems by analyzing real fixation maps collected from people with and without color vision deficiencies. Another contribution is a method to enhance color regions of an image by applying a detailed color mapping to the segmented salient regions of the given image. The segmentation is performed using the difference between the original input image and the corresponding color-blind-altered image. A second eye-tracking session, in which color-blind observers viewed the images enhanced by recoloring the segmented salient regions, reveals that the resulting fixation points are more coherent (by up to 10%) with those of the normal visual system. The eye-tracking data collected during our experiments are available in a public dataset called Eye-Tracking of Color Vision Deficiencies.
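The segmentation step described above, differencing the original image against its color-blind-altered version, can be sketched as follows. Everything here is an assumption for illustration: the linear protanopia matrix is a common textbook approximation, not necessarily the simulation model used in the article, and the function name and threshold are ours.

```python
import numpy as np

# Illustrative linear-RGB protanopia simulation matrix (a rough
# approximation; the article's exact CVD simulation is not given here).
PROTAN = np.array([[0.567, 0.433, 0.0],
                   [0.558, 0.442, 0.0],
                   [0.0,   0.242, 0.758]])

def cvd_difference_mask(rgb, threshold=0.1):
    """Segment the regions most altered by a colour-vision-deficiency
    simulation: transform the image, take the per-pixel difference
    from the original, and threshold its magnitude. Pixels above the
    threshold are the candidates for recoloring/enhancement."""
    simulated = rgb @ PROTAN.T
    diff = np.linalg.norm(rgb - simulated, axis=-1)
    return diff > threshold

# Toy image: a saturated red patch on a black background. Red is
# strongly altered by protanopia simulation, so only the patch is kept.
img = np.zeros((8, 8, 3))
img[2:6, 2:6] = [1.0, 0.0, 0.0]
mask = cvd_difference_mask(img)
```

The resulting mask marks exactly the regions where a color-deficient observer perceives the image differently, which is where targeted recoloring is most useful.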
In this paper we propose a new and effective remote sensing tool combining hardware and software solutions, as an extension of our previous work. In greater detail, the tool consists of a low-cost receiver subsystem for public weather satellites and a signal and image processing module for several tasks, such as signal and image enhancement, image reconstruction, and cloud detection. Our solution makes it possible to manage satellite data effectively with low-cost components and portable software. We aim to sample and process the modulated signal entirely in software, enabled by Software Defined Radio (SDR) hardware and CPU computational speed, overcoming hardware limitations such as high receiver noise and low ADC resolution. Since we want to extend our previous method to demodulate signals coming from various meteorological satellites, we propose a new high-frequency receiving system designed to receive and demodulate signals transmitted at 1.7 GHz. The signals coming from the satellites are demodulated, synchronized, and enhanced using low-level image processing techniques; cloud detection is then performed using the well-known K-means clustering algorithm. The hardware and software architecture extensions make our solution able to receive and demodulate high-frequency, high-bandwidth meteorological satellite signals, such as those transmitted by NOAA POES, NOAA GOES, EUMETSAT Metop, Meteor-M, and FengYun. Francesco Gugliuzza and Alessandro Bruno contributed equally to this work. The contribution of Alessandro Bruno falls within the activities of the current project titled "I telescopi Cherenkov per lo sviluppo tecnologico e culturale della Sicilia" at INAF-IASF Palermo, under the scientific supervision of Researcher Dr. Anna Anzalone.
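The cloud detection step above, K-means on satellite imagery, can be sketched with a minimal intensity-only clustering: clouds appear bright in visible and infrared weather-satellite images, so the brightest cluster is labeled as cloud. The function name, the deterministic quantile initialisation, and the toy image are our assumptions, not the paper's exact pipeline.

```python
import numpy as np

def kmeans_cloud_mask(gray, k=2, iters=20):
    """Cluster pixel intensities with K-means and label the
    brightest cluster as cloud."""
    pixels = gray.reshape(-1, 1).astype(float)
    # Deterministic init: spread centres over the intensity range.
    centers = np.quantile(pixels, np.linspace(0, 1, k)).reshape(k, 1)
    for _ in range(iters):
        # Assign each pixel to its nearest centre, then update centres.
        labels = np.argmin(np.abs(pixels - centers.T), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j, 0] = pixels[labels == j].mean()
    cloud_cluster = int(np.argmax(centers[:, 0]))
    return (labels == cloud_cluster).reshape(gray.shape)

# Toy image: a dark "sea" background with a bright "cloud" patch.
sky = np.full((10, 10), 0.2)
sky[3:7, 3:7] = 0.9
mask = kmeans_cloud_mask(sky)
```

On real imagery, more clusters (and additional channels, e.g. IR temperature) help separate cloud from bright land or sun glint.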