Symmetry is effortlessly perceived by humans across changes in viewing geometry. Here, we re-examined the network subserving symmetry processing in the context of up-to-date retinotopic definitions of visual areas. We also examined responses in object-selective cortex, as defined by functional localizers. We further examined responses to both frontoparallel and slanted symmetry while manipulating attention toward and away from symmetry. Symmetry-specific responses first emerged in V3 and continued across all downstream areas examined. Of the retinotopic areas, ventral occipital area VO1 showed the strongest symmetry response, similar in magnitude to that observed in object-selective cortex. Neural responses increased with both the coherence and the number of folds of symmetry. Compared with passive viewing, drawing attention to symmetry generally increased neural responses and improved their correspondence with psychophysical performance. For symmetry presented on a slanted plane, responses again emerged in V3, continued through downstream visual cortex, and were strongest in VO1 and LOB. Slanted and frontoparallel symmetry evoked similar activity when participants performed a symmetry-related task. However, when a symmetry-unrelated task was performed, fMRI responses to slanted symmetry were reduced relative to their frontoparallel counterparts. These task-related changes provide a neural signature suggesting that slant must be computed before symmetry can be appropriately extracted, consistent with the "normalization" account of symmetry processing. Specifically, our results suggest that normalization occurs naturally when attention is directed toward symmetry and orientation, but is interrupted when attention is directed away from these features.
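Symmetry coherence in experiments of this kind is typically manipulated by controlling the proportion of dots that are mirror-paired about an axis. A minimal, hypothetical sketch of a one-fold (vertical-axis) pattern generator follows; the function name and parameters are illustrative assumptions, not the authors' actual stimulus code:

```python
import numpy as np

def symmetric_dot_pattern(n_dots=100, coherence=1.0, rng=None):
    """Generate a 2-D dot pattern in [-1, 1] x [-1, 1] in which a
    proportion `coherence` of dots are mirrored about the vertical
    axis; the remaining dots are placed at random (noise)."""
    rng = np.random.default_rng(rng)
    n_pairs = int(round(n_dots * coherence)) // 2
    # Seed dots in the right half-plane, then reflect each across x = 0.
    half = rng.uniform([0, -1], [1, 1], size=(n_pairs, 2))
    mirrored = half * [-1, 1]
    # Fill the remainder with unpaired noise dots.
    noise = rng.uniform(-1, 1, size=(n_dots - 2 * n_pairs, 2))
    return np.vstack([half, mirrored, noise])
```

At `coherence=1.0` every dot has a mirror partner; at `coherence=0.0` the pattern is pure noise, which gives a simple parametric handle for the coherence manipulation described above.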
The role of binocular vision in grasping has frequently been assessed by measuring the effects on grasp kinematics of covering one eye. These studies have typically used three or fewer objects presented at three or fewer distances, raising the possibility that participants learn the properties of the stimulus set. If so, even relatively poor visual information may be sufficient to identify which object/distance configuration is presented on a given trial, in effect providing an additional source of depth information. Here we show that the availability of this uncontrolled cue leads to an underestimate of the effects of removing binocular information, and therefore to an overestimate of the effectiveness of the remaining cues. We measured the effects of removing binocular cues on visually open-loop grasps using (1) a conventional small stimulus set, and (2) a large, pseudo-randomised stimulus set that could not be learned. Removing binocular cues resulted in a significant change in grip aperture scaling in both conditions: peak grip apertures were larger (when reaching to small objects) and scaled less with increases in object size. However, this effect was significantly larger with the randomised stimulus set. These results confirm that binocular information makes a significant contribution to grasp planning. Moreover, they suggest that learned stimulus information can contribute to grasping in typical experiments, and so the contribution of information from binocular vision (and from other depth cues) may not have been measured accurately.
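Grip-aperture scaling of the kind reported here is commonly quantified as the slope of a linear regression of peak grip aperture on object size. A minimal sketch with hypothetical data (the values below are illustrative, not taken from the study):

```python
import numpy as np

# Hypothetical peak grip apertures (mm) for objects of three sizes (mm),
# measured under binocular and monocular viewing.
object_size = np.array([20, 40, 60, 20, 40, 60])
binocular = np.array([45, 62, 80, 47, 60, 82])
monocular = np.array([58, 68, 79, 60, 70, 81])

# Scaling = slope of peak grip aperture regressed on object size.
slope_bin = np.polyfit(object_size, binocular, 1)[0]
slope_mon = np.polyfit(object_size, monocular, 1)[0]
```

In this illustrative pattern, the monocular slope is shallower than the binocular one, matching the reduced scaling (and larger apertures for small objects) described in the abstract.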
We present a database of high-definition (HD) videos for the study of traits inferred from whole-body actions. Twenty-nine actors (19 female) were filmed performing different actions—walking, picking up a box, putting down a box, jumping, sitting down, and standing and acting—while conveying different traits, including four emotions (anger, fear, happiness, sadness), untrustworthiness, and neutral, where no specific trait was conveyed. For the actions conveying the four emotions and untrustworthiness, the actions were filmed multiple times, with the actor conveying the traits at different levels of intensity. In total, we made 2,783 action videos (in both two-dimensional and three-dimensional format), each lasting 7 s with a frame rate of 50 fps. All videos were filmed in a green-screen studio in order to isolate the action information from all contextual detail and to provide a flexible stimulus set for future use. In order to validate the traits conveyed by each action, we asked participants to rate each action in the two-dimensional videos according to the trait the actor portrayed. To provide a useful database of stimuli of multiple actions conveying multiple traits, each video name contains information on the gender of the actor, the action executed, the trait conveyed, and the rating of its perceived intensity. All videos can be downloaded free at the following address: http://www-users.york.ac.uk/~neb506/databases.html. We discuss potential uses for the database in the analysis of the perception of whole-body actions. Electronic supplementary material: The online version of this article (doi:10.3758/s13428-013-0439-6) contains supplementary material, which is available to authorized users.
Adaptation to facial characteristics, such as gender and viewpoint, has been shown to both bias our perception of faces and improve facial discrimination. In this study, we examined whether adapting to two levels of face trustworthiness improved sensitivity around the adapted level. Facial trustworthiness was manipulated by morphing between trustworthy and untrustworthy prototypes, each generated by morphing eight trustworthy and eight untrustworthy faces, respectively. In the first experiment, just-noticeable differences (JNDs) were calculated for an untrustworthy face after participants adapted to an untrustworthy face, adapted to a trustworthy face, or did not adapt. In the second experiment, the three conditions were identical, except that JNDs were calculated for a trustworthy face. In the third experiment, we examined whether adapting to an untrustworthy male face improved discrimination of an untrustworthy female face. In all experiments, participants completed a two-interval forced-choice (2-IFC) adaptive staircase procedure, in which they judged which face was more untrustworthy. JNDs were derived from a psychometric function fitted to the data. Adaptation improved sensitivity to faces conveying the same level of trustworthiness when compared to no adaptation. When adapting to and discriminating around a different level of face trustworthiness, there was no improvement in sensitivity, and JNDs were equivalent to those in the no-adaptation condition. The improvement in sensitivity occurred even when adapting to a face of a different gender and identity. These results suggest that adaptation to facial trustworthiness can selectively enhance mechanisms underlying the coding of facial trustworthiness to improve perceptual sensitivity. These findings have implications for the role of our visual experience in the decisions we make about the trustworthiness of other individuals.
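JNDs from a 2-IFC staircase are typically obtained by fitting a psychometric function to proportion-correct data and reading off a threshold criterion. A minimal sketch using a cumulative Gaussian and hypothetical data (the function form, criterion, and values are illustrative assumptions, not the study's actual analysis):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical 2-IFC data: trustworthiness morph-level difference (x)
# vs. proportion of trials the more untrustworthy face was chosen.
x = np.array([0.02, 0.05, 0.10, 0.15, 0.20, 0.30])
p = np.array([0.52, 0.60, 0.71, 0.80, 0.88, 0.97])

def pf(x, sigma):
    # For 2-IFC, proportion correct runs from 0.5 (chance) to 1.0;
    # a zero-mean cumulative Gaussian in x >= 0 has this range.
    return norm.cdf(x, loc=0, scale=sigma)

(sigma,), _ = curve_fit(pf, x, p, p0=[0.1])

# Take the JND as the morph difference yielding 75% correct.
jnd = sigma * norm.ppf(0.75)
```

The same fitted `sigma` can then be compared across adaptation conditions: a smaller JND after adapting to the same trustworthiness level would reflect the sensitivity improvement reported above.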