We investigated how the brain's hemispheres process explicit and implicit facial expressions in two 'split-brain' patients (one with a complete and one with a partial anterior callosal resection). Photographs of faces expressing positive, negative or neutral emotions were shown either centrally or bilaterally. The task consisted of judging the friendliness of each person in the photographs. Half of the stimuli were 'hybrid faces', that is, composite images containing emotional information only in the low spatial frequencies, blended with a neutral expression of the same individual in the remaining spatial frequencies. The other half were unfiltered faces. With the hybrid faces, both the patients and a matched control group were more influenced in their social judgements by the emotional expression of the face shown in the left visual field (LVF). When the expressions were shown explicitly, that is without filtering, the control group and the partially callosotomized patient based their judgements on the face shown in the LVF, whereas the complete split-brain patient based his ratings mainly on the face presented in the right visual field. We conclude that the processing of implicit emotions does not require the integrity of callosal fibres and can take place within subcortical routes lateralized in the right hemisphere.
Prolonged exposure to a stimulus results in a subsequent perceptual bias. This perceptual adaptation aftereffect occurs not only for simple stimulus features but also for high-level stimulus properties (e.g., a face's gender, identity and emotional expression). Recent studies on aftereffects demonstrate that adaptation to human bodies can modulate face perception because these stimuli share common properties. Those findings suggest that the aftereffect is related not to the physical properties of the stimulus but to the large number of semantic attributes shared by the adapter and the test. Here, we report a novel cross-category adaptation paradigm with both silhouette face profiles (Experiment 1) and frontal-view faces (Experiment 2) as adapters, testing the aftereffects when viewing an androgynous test body. The results indicate that adaptation to both silhouette face profiles and frontal-view faces produces gender aftereffects (e.g., after visual exposure to a female face, the androgynous body appears more male, and vice versa). These findings confirm that high-level perceptual aftereffects can occur between cross-category stimuli that share common properties.
This study investigated whether the visual and auditory Simon effects can be accounted for by the same mechanism. In a single experiment, we performed a detailed comparison of the visual and auditory Simon effects arising in behavioural responses and in pupil dilation, a psychophysiological measure considered a marker of the cognitive effort induced by conflict processing. To address our question, we performed sequential and distributional analyses on both reaction times and pupil dilation. Results confirmed that the mechanisms underlying the visual and auditory Simon effects are functionally equivalent in terms of the interaction between unconditional and conditional response processes. The two modalities differ, however, in the strength of their activation and inhibition. Importantly, the pupillary data mirrored the pattern observed in the behavioural data for both tasks, adding physiological evidence to the current literature on the processing of visual and auditory information in a conflict task.