Mathematically, three-dimensional space can be represented in Cartesian, polar, or other coordinate systems. In the physical sciences, however, the choice of representation is constrained by the need to simplify a machine's computations while enhancing its efficiency. Does the brain, for the same reasons, 'select' the most cost-efficient way to represent the three-dimensional locations of objects? Because we frequently interact with objects on the common ground surface, it might be beneficial for the visual system to code an object's location in a ground-surface-based reference frame. More precisely, the brain could use a quasi-two-dimensional coordinate system (x_s, y_s) defined with respect to the ground surface s, rather than a strictly three-dimensional coordinate system (x, y, z), thereby reducing coding redundancy and simplifying computations. Here we provide support for this view by studying human psychophysical performance in perceiving absolute distance and in visually directed action tasks. For example, when an object was seen on a continuous, homogeneously textured ground surface, the observer judged the distance to the object accurately. However, when such surface information was unavailable (for example, when the object was seen across a gap in the ground, or across distinct texture regions), distance judgement was impaired.
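The economy of the proposed coding scheme can be sketched in a few lines. This is a minimal illustration, not the authors' model: it assumes a flat ground plane and a fixed eye height (the 1.6 m value is an arbitrary illustrative choice), under which an object's 3D location is fully determined by its two surface coordinates plus one eye-height scalar.

```python
import math

EYE_HEIGHT = 1.6  # metres; assumed observer eye height (illustrative value)

def ground_to_3d(x_s, y_s, eye_height=EYE_HEIGHT):
    """Map a quasi-2D ground-surface coordinate (x_s, y_s) to an
    eye-centred 3D coordinate (x, y, z), assuming a flat ground plane.
    The third dimension is redundant: it is fixed by the surface itself."""
    return (x_s, y_s, -eye_height)

def egocentric_distance(x_s, y_s, eye_height=EYE_HEIGHT):
    """Absolute distance to an object resting on the ground, recovered
    from its 2D surface coordinates plus a single eye-height scalar."""
    x, y, z = ground_to_3d(x_s, y_s, eye_height)
    return math.sqrt(x * x + y * y + z * z)

# An object 1 m to the right and 3 m ahead on the ground:
d = egocentric_distance(1.0, 3.0)
```

The sketch makes the redundancy argument concrete: only (x_s, y_s) varies per object, which is consistent with the finding that distance judgement fails precisely when the surface linking observer and object is disrupted.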
People with normal eyesight typically see horizontal and vertical gratings better than oblique gratings (Psychological Bulletin 78 (1972) 266; Perception 9 (1980) 37). In the present study we investigated whether this oblique-effect anisotropy persists when viewing more complex visual stimuli that better correspond to the content encountered in everyday viewing of the world. We show that the ability to see oriented structure in an image with broadband spatial content is indeed anisotropic, but that the pattern of this orientation bias is completely different from that obtained with simpler stimuli: with more realistic broadband stimuli, horizontal structure is seen worst and oblique structure is seen best. We suggest that this "horizontal effect" would be evolutionarily useful, serving to discount the horizon and other oriented content that tends to dominate natural scenes and thereby increasing the salience of objects in typical outdoor scenes.
We investigated the human perceptual performance allowed by the relatively impoverished information conveyed in nighttime natural scenes. We used images of nighttime outdoor scenes rendered by image-intensified low-light visible (i2) sensors, thermal infrared (ir) sensors, and a false-color i2/ir fusion technique. We found that nighttime imagery provides adequate low-level image information for effective perceptual organization in a classification task, but that performance for exemplars within a given object category depends on the image type. Overall performance was best with the false-color fused images. This is consistent with the suggestion in the literature that color plays a predominant role in the perceptual grouping and segmentation of objects in a scene, and supports the suggestion that adding color to complex achromatic scenes aids the perceptual organization required for visual search. In the present study, we assess perceptual performance with alternative night-vision sensors and fusion methods, and begin to characterize the perceptual organization abilities permitted by the information in relatively impoverished images of complex scenes. Applications of this research include improving night-vision, medical, and other devices that use alternative sensors or degraded imagery.
The goal of this study was to determine the perceptual advantages of multiband sensor-fused (achromatic and chromatic) imagery over conventional single-band nighttime (image-intensified and infrared) imagery for a wide range of visual tasks, including detection, orientation, and scene recognition. Participants were 151 active-duty military observers whose reaction times and accuracy scores were recorded during a visual search task. The data indicate that sensor fusion did not improve performance on a target detection task relative to single-band imagery, but it did facilitate object recognition, judgments of spatial orientation, and scene recognition. Observers' recognition and orientation judgments benefited from the emergent information in the fused imagery (i.e., dominant information from two or more sensors combined into a single displayed image). Actual or potential applications of this research include the deployment of sensor-fused imaging systems in automobile, aviation, and maritime displays to enhance operators' visual performance under low-light conditions.
Objective: To evaluate a new Fourier-based analysis method for diagnosing glaucoma using retinal nerve fiber layer (RNFL) thickness estimates obtained from an optical coherence tomograph (OCT 2000) and a scanning laser polarimeter (GDx). Methods: We obtained RNFL thickness estimates from 1 eye of each of 38 healthy individuals and 42 patients with early glaucomatous visual field loss using the OCT and GDx devices. The shape of the RNFL double-hump pattern was assessed with Fourier analysis, and the resulting values were entered into a linear discriminant analysis. Receiver operating characteristic (ROC) curves were used to compare the performance of the Fourier-based metrics against other commonly used RNFL analytical procedures. Reliability was assessed on independent samples by the split-half method. Correlations were calculated to determine the extent to which the Fourier discriminant measures and other RNFL measures covaried between the 2 devices, and the relationship between these RNFL measures and visual field measures.
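The analysis pipeline described above (Fourier summary of the double-hump shape, linear discriminant, ROC comparison) can be sketched on synthetic data. This is a toy illustration under stated assumptions, not the authors' clinical method: the profile generator, the group sizes, the "flattened hump" model of glaucomatous loss, and the choice of four harmonics are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnfl_profile(n=64, hump=1.0, noise=0.05):
    """Synthetic RNFL thickness profile sampled around the optic disc.
    The characteristic 'double-hump' shape is modelled as a cos(2*theta)
    component; glaucomatous eyes get a flattened hump (an illustrative
    assumption, not a clinical model)."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return 0.5 + hump * 0.25 * (1 - np.cos(2 * theta)) + rng.normal(0, noise, n)

def fourier_features(profile):
    """Amplitudes of the low-order Fourier harmonics, summarising the
    shape of the double-hump pattern."""
    spectrum = np.fft.rfft(profile)
    return np.abs(spectrum[:4])  # DC plus the first three harmonics

# Small synthetic data set mirroring the study's group sizes.
healthy = np.array([fourier_features(rnfl_profile(hump=1.0)) for _ in range(38)])
glaucoma = np.array([fourier_features(rnfl_profile(hump=0.5)) for _ in range(42)])

def lda_scores(a, b):
    """Fisher linear discriminant: project both groups onto the direction
    that best separates their means relative to pooled scatter."""
    cov = np.cov(np.vstack([a, b]).T) + 1e-6 * np.eye(a.shape[1])
    w = np.linalg.solve(cov, a.mean(0) - b.mean(0))
    return a @ w, b @ w

def roc_auc(pos, neg):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    return np.mean([float(p > n) + 0.5 * float(p == n) for p in pos for n in neg])

s_healthy, s_glaucoma = lda_scores(healthy, glaucoma)
auc = roc_auc(s_healthy, s_glaucoma)
```

The AUC computed from the discriminant scores is the summary statistic that ROC-based comparisons of competing RNFL metrics would use; on this cleanly separable synthetic data it approaches 1.0, whereas real clinical data would yield lower values.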