We investigated human perceptual performance allowed by the relatively impoverished information conveyed in nighttime natural scenes. We used images of nighttime outdoor scenes rendered by image-intensified low-light visible (i2) sensors, thermal infrared (ir) sensors, and an i2/ir fusion technique with color information added. We found that nighttime imagery provides adequate low-level image information for effective perceptual organization on a classification task, but that performance for exemplars within a given object category depends on the image type. Overall performance was best with the false-color fused images. This is consistent with the suggestion in the literature that color plays a predominant role in perceptual grouping and segmenting of objects in a scene, and it supports the suggestion that the addition of color to complex achromatic scenes aids the perceptual organization required for visual search. In the present study, we address the assessment of perceptual performance with alternative night-vision sensors and fusion methods and begin to characterize the perceptual organization abilities permitted by the information in relatively impoverished images of complex scenes. Applications of this research include improving night-vision, medical, and other devices that use alternative sensors or degraded imagery.
Inferior performance for obliquely oriented stimuli is often observed on higher level visual and somatosensory tasks and also on tests of low-level visual sensory ability. This study demonstrated an anisotropy of low-level somatosensory performance. Sensitivity to gratings on the finger pad was highest for gratings oriented proximally-distally, intermediate for oblique gratings, and lowest for medial-lateral gratings. This pattern supports a model proposing that detection threshold is determined by the number of neurons tuned to a stimulus (A. Anzai, M. A. Bearse, Jr., R. D. Freeman, & D. Cai, 1995). A classification of somatosensory and visual anisotropies is proposed in which orientation biases are attributed to either anisotropic sensory filtering (Class 1) or anisotropic higher level processing (Class 2). It was concluded that a given instance of anisotropic visual or somatosensory performance may stem from low-level sensory factors, high-level factors, or a mixture of the two, depending on the task demands.
The goal of this study was to determine the perceptual advantages of multiband sensor-fused (achromatic and chromatic) imagery over conventional single-band nighttime (image-intensified and infrared) imagery for a wide range of visual tasks, including detection, orientation, and scene recognition. Participants were 151 active-duty military observers whose reaction time and accuracy scores were recorded during a visual search task. Data indicate that sensor fusion did not improve performance relative to that obtained with single-band imagery on a target detection task but did facilitate object recognition, judgments of spatial orientation, and scene recognition. Observers' recognition and orientation judgments were improved by the emergent information within the sensor-fused imagery (i.e., combining dominant information from two or more sensors into a single displayed image). Actual or potential applications of this research include the deployment of image-sensor-fused systems for automobile, aviation, and maritime displays to enhance operators' visual performance during low-light conditions.
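The abstracts above describe sensor fusion as combining dominant information from two or more bands into a single displayed image, with false-color fusion assigning sensor bands to display color channels. The sketch below illustrates one simple member of that family of schemes; the function name, the specific channel mapping (ir to red, i2 to green, per-pixel minimum to blue), and the min-max normalization are illustrative assumptions, not the particular fusion algorithms evaluated in these studies.

```python
import numpy as np

def false_color_fuse(i2, ir):
    """Illustrative two-band false-color fusion (NOT the studies' method).

    Maps the thermal infrared (ir) band to the red channel, the
    image-intensified (i2) band to green, and the per-pixel minimum of
    the two normalized bands to blue, yielding an RGB image in [0, 1].
    """
    i2 = np.asarray(i2, dtype=float)
    ir = np.asarray(ir, dtype=float)

    def normalize(band):
        # Rescale a band to [0, 1]; a flat band maps to zeros.
        rng = band.max() - band.min()
        return (band - band.min()) / rng if rng > 0 else np.zeros_like(band)

    i2_n = normalize(i2)
    ir_n = normalize(ir)
    # Stack channels last: shape (H, W) -> (H, W, 3).
    return np.stack([ir_n, i2_n, np.minimum(i2_n, ir_n)], axis=-1)
```

For example, a pixel that is bright in the thermal band but dark in the i2 band renders red, cueing a warm target; a pixel bright in both renders toward yellow-white. Real fusion pipelines typically add spatial filtering and perceptually motivated color remapping on top of a channel assignment like this.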
The ability of humans to detect striated stimuli on the distal phalanges was found to be highly anisotropic. Observers were much more sensitive to stripes presented in the proximal-distal orientation than to stripes in any other orientation. This tactile anisotropy was contrasted with the well-known visual anisotropy in which sensitivity is greatest for stripes at the horizontal and vertical orientations. We suggest that both the tactile anisotropy and the visual anisotropy are caused by corresponding anisotropies in the distribution of preferred orientations of orientation-selective neurons within the respective modalities.