Neuropsychological studies prompted the theory that the primate visual system might be organized into two parallel pathways, one for conscious perception and one for guiding action. Supporting evidence in healthy subjects seemed to come from a dissociation in visual illusions: In previous studies, the Ebbinghaus (or Titchener) illusion deceived perceptual judgments of size, but only marginally influenced the size estimates used in grasping. Contrary to those results, the findings from the present study show that there is no difference in the sizes of the perceptual and grasp illusions if the perceptual and grasping tasks are appropriately matched. We show that the differences found previously can be accounted for by a hitherto unknown, nonadditive effect in the illusion. We conclude that the illusion does not provide evidence for the existence of two distinct pathways for perception and action in the visual system.
Although remarkably robust, face recognition is not perfectly invariant to pose and viewpoint changes. It has long been known that both profile and full-face views result in poorer recognition performance than a 3/4 view. However, few data exist that investigate this phenomenon in detail. The present work provides such data, using a high angular resolution and a large range of poses. Since there are inconsistencies in the literature concerning these issues, we emphasize the different roles of the learning view and the testing view in the recognition experiment. We also emphasize the roles of information contained in the texture and in the shape of a face. Our stimuli were generated from laser-scanned head models and contained either the natural texture or only Lambertian shading and no texture. The results of our same/different face recognition experiments are: (1) Only the learning view, but not the testing view, affects recognition performance. (2) For textured faces, the optimal learning view is closer to the full-face view than for the shaded faces. (3) For shaded faces, we find significantly better recognition performance for the symmetric view. The results can be interpreted in terms of different strategies to recover invariants from texture and from shading.
Self-motion through an environment involves a composite of signals such as visual and vestibular cues. Building upon previous results showing that visual and vestibular signals combine in a statistically optimal fashion, we investigated the relative weights of visual and vestibular cues during self-motion. The experiment comprised three conditions: vestibular alone, visual alone (with four different standard heading values), and visual-vestibular combined. In the combined-cue condition, inter-sensory conflicts were introduced (Δ = ±6° or ±10°). Participants performed a 2-interval forced-choice task in all conditions and were asked to judge in which of the two intervals they moved more to the right. The cue-conflict condition revealed the relative weights associated with each modality. We found that even when there was a relatively large conflict between the visual and vestibular cues, participants exhibited a statistically optimal reduction in variance. On the other hand, we found that the pattern of results in the unimodal conditions did not predict the weights in the combined-cue condition. Specifically, visual-vestibular cue combination was not predicted solely by the reliability of each cue; rather, more weight was given to the vestibular cue.
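The "statistically optimal fashion" referred to above is the standard maximum-likelihood cue-combination model, in which each cue is weighted by its reliability (inverse variance) and the combined estimate has lower variance than either cue alone. The sketch below illustrates that model only; the function name and the numeric standard deviations are illustrative assumptions, not values from the study.

```python
# Minimal sketch of maximum-likelihood (statistically optimal) cue
# combination. Weights are proportional to each cue's reliability
# (1 / variance); the numbers here are illustrative, not from the study.
def optimal_combination(sigma_vis, sigma_vest):
    """Return (w_vis, w_vest, sigma_combined) under the MLE model."""
    r_vis = 1.0 / sigma_vis**2    # reliability of the visual cue
    r_vest = 1.0 / sigma_vest**2  # reliability of the vestibular cue
    w_vis = r_vis / (r_vis + r_vest)
    w_vest = r_vest / (r_vis + r_vest)
    # Combined variance is the harmonic combination of the two variances,
    # so it is always smaller than either unimodal variance.
    sigma_combined = (1.0 / (r_vis + r_vest)) ** 0.5
    return w_vis, w_vest, sigma_combined

# Hypothetical example: a noisier visual cue (sigma = 4 deg) and a more
# reliable vestibular cue (sigma = 2 deg).
w_vis, w_vest, sigma_c = optimal_combination(sigma_vis=4.0, sigma_vest=2.0)
print(w_vis, w_vest, sigma_c)  # → 0.2 0.8 ~1.79
```

Under this model the less variable cue receives the larger weight; the abstract's key finding is that the empirically measured vestibular weight exceeded even this reliability-based prediction.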
Is human object recognition viewpoint dependent or viewpoint invariant under "everyday" conditions? Biederman and Gerhardstein (1993) argue that viewpoint-invariant mechanisms are used almost exclusively. However, our analysis indicates that: (1) their conditions for immediate viewpoint invariance lack the generality to characterize a wide range of recognition phenomena; (2) the extensive body of viewpoint-dependent results cannot be dismissed as "processing by-products" or "experimental artifacts"; (3) geon structural descriptions cannot coherently account for category recognition, the domain they are intended to explain. We conclude that the weight of current evidence supports an exemplar-based multiple-views mechanism as an important component of both exemplar-specific and categorical recognition.
When light strikes a translucent material (such as wax, milk or fruit flesh), it enters the body of the object, scatters and reemerges from the surface. The diffusion of light through translucent materials gives them a characteristic visual softness and glow. What image properties underlie this distinctive appearance? What cues allow us to tell whether a surface is translucent or opaque? Previous work on the perception of semitransparent materials was based on a very restricted physical model of thin filters [Metelli 1970; 1974a,b]. However, recent advances in computer graphics [Jensen et al. 2001; Jensen and Buhler 2002] allow us to efficiently simulate the complex subsurface light transport effects that occur in real translucent objects. Here we use this model to study the perception of translucency, using a combination of psychophysics and image statistics. We find that many of the cues that were traditionally thought to be important for semitransparent filters (e.g., X-junctions) are not relevant for solid translucent objects. We discuss the role of highlights, color, object size, contrast, blur, and lighting direction in the perception of translucency. We argue that the physics of translucency are too complex for the visual system to estimate intrinsic physical parameters by inverse optics. Instead, we suggest that we identify translucent materials by parsing them into key regions and by gathering image statistics from these regions.