We model the visual interpolation of missing contours by extending contour fragments under a smoothness constraint. Interpolated trajectories result from an algorithm that computes the vector sum of two fields corresponding to different unification factors: the good continuation (GC) field and the minimal path (MP) field. As the distance from terminators increases, the GC field decreases and the MP field increases. Viewer-independent and viewer-dependent variables modulate GC-MP contrast (i.e., the relative strength of GC and MP maximum vector magnitudes). Viewer-independent variables include the local geometry as well as more global properties such as contour support ratio and shape regularity. Viewer-dependent variables include the retinal gap between contour endpoints and the retinal orientation of their stems. GC-MP contrast is the only free parameter of our field model. In the case of partially occluded angles, interpolated trajectories become flatter as GC-MP contrast decreases. Once GC-MP contrast is set to a specific value, derived from empirical measures on a given configuration, the model predicts all interpolation trajectories corresponding to different types of occlusion of the same angle. Model predictions fit psychophysical data on the effects of viewer-independent and viewer-dependent variables.
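The vector-sum rule described above can be sketched in code. The weighting functions, decay constant, and exact form of the two fields below are illustrative assumptions, not the model's published equations; the sketch only captures the stated qualitative behavior (GC weight decays and MP weight grows with distance from the terminator, with their relative strength set by the single free parameter, GC-MP contrast):

```python
import numpy as np

def interpolated_direction(s, gc_dir, mp_dir, contrast, tau=1.0):
    """Combine good-continuation (GC) and minimal-path (MP) field vectors.

    s        -- arc-length distance from the contour terminator
    gc_dir   -- unit vector of the GC field (tangent continuation); assumed form
    mp_dir   -- unit vector of the MP field (straight path to the other endpoint)
    contrast -- GC-MP contrast: relative strength of the two field maxima
    tau      -- decay constant (illustrative assumption)
    """
    w_gc = contrast * np.exp(-s / tau)                   # GC weight decays with distance
    w_mp = (1.0 - contrast) * (1.0 - np.exp(-s / tau))   # MP weight grows with distance
    v = w_gc * np.asarray(gc_dir, float) + w_mp * np.asarray(mp_dir, float)  # vector sum
    return v / np.linalg.norm(v)                         # local interpolation direction
```

With this weighting, lowering `contrast` shifts the combined vector toward the minimal path everywhere along the trajectory, which is the sense in which interpolated angles become flatter as GC-MP contrast decreases.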
Perceptual judgments of relative depth from binocular disparity are systematically distorted in humans, despite in principle having access to reliable 3D information. Interestingly, these distortions vanish at a natural grasping distance, as if perceived stereo depth is contingent on a specific reference distance for depth-disparity scaling that corresponds to the length of our arm. Here we show that the brain's representation of the arm indeed powerfully modulates depth perception, and that this internal calibration can be quickly updated. We used a classic visuomotor adaptation task in which subjects execute reaching movements with the visual feedback of their reaching finger displaced farther in depth, as if they had a longer arm. After adaptation, 3D perception changed dramatically, and became accurate at the "new" natural grasping distance, the updated disparity scaling reference distance. We further tested whether the rapid adaptive changes were restricted to the visual modality or were characteristic of sensory systems in general. Remarkably, we found an improvement in tactile discrimination consistent with a magnified internal image of the arm. This suggests that the brain integrates sensory signals with information about arm length, and quickly adapts to an artificially updated body structure. These adaptive processes are most likely a relic of the mechanisms needed to optimally correct for changes in size and shape of the body during ontogenesis.
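The dependence of perceived stereo depth on a disparity-scaling reference distance can be illustrated with the standard small-angle geometry of binocular disparity. The function below is a sketch of that textbook relation, not the study's model; the interocular distance and the identification of the reference distance with grasping distance are assumptions for illustration:

```python
def perceived_depth(true_depth, true_distance, ref_distance, iod=0.065):
    """Illustrative depth-disparity scaling (small-angle approximation).

    A depth interval at the true viewing distance produces relative disparity
        delta ~= iod * true_depth / true_distance**2
    If the visual system scales disparity with a fixed reference distance
    (assumed here to be the natural grasping distance), recovered depth is
        depth' = delta * ref_distance**2 / iod
    All lengths in meters; iod is an assumed interocular distance.
    """
    delta = iod * true_depth / true_distance**2   # relative disparity (rad)
    return delta * ref_distance**2 / iod          # perceived depth (m)
```

The sketch reproduces the qualitative pattern in the abstract: depth judgments are accurate when the viewing distance equals the reference distance, and systematically compressed or expanded elsewhere, so shifting the reference distance (as adaptation to a "longer arm" would) shifts where perception is veridical.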
Perception, cognition, and emotion do not operate along segregated pathways; rather, various sources of evidence support their adaptive interaction. For instance, the aesthetic appraisal of powerful mood inducers like music can bias the facial expression of emotions towards mood congruency. In four experiments we showed similar mood-congruency effects elicited by the comfort/discomfort of body actions. Using a novel Motor Action Mood Induction Procedure, we let participants perform comfortable/uncomfortable visually-guided reaches and tested them in a facial emotion identification task. Through the alleged mediation of motor-action-induced mood, action comfort enhanced the quality of the participant's global experience (a neutral face appeared happy and a slightly angry face neutral), while action discomfort made a neutral face appear angry and a slightly happy face neutral. Furthermore, uncomfortable (but not comfortable) reaching improved the sensitivity for the identification of emotional faces and reduced the identification time of facial expressions, as a possible effect of hyper-arousal from an unpleasant bodily experience.

A controversial hypothesis, named the Sexualized Body Inversion Hypothesis (SBIH), claims similar visual processing of sexually objectified women (i.e., with a focus on the sexual body parts) and inanimate objects, as indicated by an absence of the inversion effect for both types of stimuli. The current study aims to shed light on the mechanisms behind the SBIH in a series of four experiments. Using a modified version of Bernard et al.'s (2012) visual-matching task, we first tested the core assumption of the SBIH, namely that a similar processing style occurs for sexualized human bodies and objects. In Experiments 1 and 2, a non-sexualized (personalized) condition plus two object-control conditions (mannequins and houses) were included in the experimental design. Results showed an inversion effect for images of personalized women and mannequins, but not for sexualized women and houses. Second, we explored whether this effect was driven by differences in stimulus asymmetry, by testing the mediating and moderating role of this visual feature. In Experiment 3, we provided the first evidence that not only the sexual attributes of the images but also additional perceptual features of the stimuli, such as their asymmetry, played a moderating role in shaping the inversion effect. Lastly, we investigated the strategy adopted in the visual-matching task by tracking participants' eye movements. Results of Experiment 4 suggest an association between a specific pattern of visual exploration of the images and the presence of the inversion effect. Findings are discussed with respect to the literature on sexual objectification.
Recent studies suggest that the active observer combines optic flow information with extra-retinal signals resulting from head motion. Such a combination allows, in principle, a correct discrimination of the presence or absence of surface rotation. In Experiments 1 and 2, observers were asked to perform such a discrimination task while performing a lateral head shift. In Experiment 3, observers were shown the optic flow generated by their own movement with respect to a stationary planar slanted surface and were asked to classify perceived surface rotation as being small or large. We found that the perception of surface motion was systematically biased: in active, as well as in passive vision, perceived surface rotation was affected by the deformation component of the first-order optic flow, regardless of the actual surface rotation. We also found that the addition of a null disparity field increased the likelihood of perceiving surface rotation in active, but not in passive vision. Both these results suggest that vestibular information, provided by active vision, is not sufficient for veridical 3D shape and motion recovery from the optic flow.
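The deformation ("def") component mentioned above is one of the standard first-order differential invariants of the optic flow field, alongside divergence and curl. The decomposition below is the textbook definition applied to a 2x2 flow-field Jacobian; it is included only to make the term concrete and is not code from the study:

```python
import numpy as np

def flow_components(J):
    """Decompose a 2x2 optic-flow gradient (Jacobian) into first-order components.

    J = [[du/dx, du/dy], [dv/dx, dv/dy]] for a flow field (u, v).
    Returns divergence, curl, and the deformation (shear) magnitude,
    the component that the reported rotation bias followed.
    """
    (ux, uy), (vx, vy) = J
    div = ux + vy                              # isotropic expansion/contraction
    curl = vx - uy                             # rigid image rotation
    deformation = np.hypot(ux - vy, uy + vx)   # shear (def) magnitude
    return div, curl, deformation
```

A pure image rotation has nonzero curl but zero deformation, whereas the flow produced by a slanted surface under translation carries nonzero deformation; a bias tied to deformation rather than to actual surface rotation therefore implicates this specific flow component.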
Contour curvature polarity (i.e., concavity/convexity) is recognized as an important factor in shape perception. However, current interpolation models do not consider it among the factors that modulate the trajectory of amodally-completed contours. Two hypotheses generate opposite predictions about the effect of contour polarity on surface interpolation. Convexity advantage: if convexities are preferred over concavities, contours of convex portions should be more extrapolated than those of concave portions. Minimal area: if the area of amodally-completed surfaces tends to be minimized, contours of convex portions should be less extrapolated than contours of concave portions. We ran three experiments using two methods, simultaneous length comparison and probe localization, and different displays (pictures vs. random dot stereograms). Results indicate that contour polarity affects the amodally-completed angles of regular and irregular surfaces. As predicted by the minimal area hypothesis, image contours are less extrapolated when the amodal portion is convex rather than concave. The field model of interpolation [Fantoni, C., & Gerbino, W. (2003). Contour interpolation by vector-field combination. Journal of Vision, 3, 281-303. Available from http://journalofvision.org/3/4/4/] has been revised to take into account surface-level factors and to explain area minimization as an effect of surface support ratio.