Summary
Neuronal cortical circuitry comprises feedforward, lateral, and feedback projections, each of which terminates in distinct cortical layers [1–3]. In sensory systems, feedforward processing transmits signals from the external world into the cortex, whereas feedback pathways signal the brain's inference of the world [4–11]. However, the integration of feedforward, lateral, and feedback inputs within each cortical area impedes the investigation of feedback, and to date, no technique has isolated the feedback of visual scene information in distinct layers of healthy human cortex. We masked feedforward input to a region of V1 cortex and studied the remaining internal processing. Using high-resolution functional brain imaging (0.8 mm³) and multivoxel pattern information techniques, we demonstrate that during normal visual stimulation scene information peaks in mid-layers. Conversely, we found that contextual feedback information peaks in outer, superficial layers. Further, we found that shifting the position of the visual scene surrounding the mask parametrically modulates feedback in superficial layers of V1. Our results reveal the layered cortical organization of external versus internal visual processing streams during perception in healthy human subjects. We provide empirical support for theoretical feedback models such as predictive coding [10, 12] and coherent infomax [13], and reveal the potential of high-resolution fMRI to access internal processing at sub-millimeter resolution in human cortex.
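The depth-resolved decoding described above can be illustrated with a toy sketch: a nearest-class-mean correlation classifier (a basic multivoxel pattern analysis scheme) applied to simulated voxel patterns, with a hypothetical per-depth signal strength. All data, parameters, and the depth profile below are invented for illustration and do not reproduce the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_accuracy(signal, n_trials=80, n_voxels=200):
    """Nearest-class-mean correlation decoding of two simulated 'scene'
    conditions; 'signal' scales the condition-specific voxel pattern."""
    templates = rng.standard_normal((2, n_voxels))
    labels = np.tile([0, 1], n_trials // 2)
    data = signal * templates[labels] + rng.standard_normal((n_trials, n_voxels))
    train, test = data[: n_trials // 2], data[n_trials // 2:]
    ltr, lte = labels[: n_trials // 2], labels[n_trials // 2:]
    means = np.stack([train[ltr == c].mean(axis=0) for c in (0, 1)])
    # z-score each pattern so the dot product is proportional to Pearson r
    zt = (test - test.mean(1, keepdims=True)) / test.std(1, keepdims=True)
    zm = (means - means.mean(1, keepdims=True)) / means.std(1, keepdims=True)
    pred = (zt @ zm.T).argmax(axis=1)
    return float((pred == lte).mean())

# Hypothetical depth profile: stimulus information strongest in mid-layers
for depth, strength in [("superficial", 0.2), ("middle", 0.6), ("deep", 0.2)]:
    print(f"{depth}: {decode_accuracy(strength):.2f}")
```

In the real analysis the per-depth signal would come from sampling fMRI voxels at different cortical depths, not from a simulated profile.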
Reading the non-verbal cues from faces to infer the emotional states of others is central to our daily social interactions from very early in life. Despite the relatively well-documented ontogeny of facial expression recognition in infancy, our understanding of the development of this critical social skill throughout childhood into adulthood remains limited. To this end, using a psychophysical approach, we implemented the QUEST threshold-seeking algorithm to parametrically manipulate the quantity of signal available in faces displaying the six basic emotional expressions plus neutral, normalized for contrast and luminance. We thus determined observers' perceptual thresholds for effective discrimination of each emotional expression from 5 years of age up to adulthood. Consistent with previous studies, happiness was most easily recognized, requiring minimal signal (35% on average), whereas fear required the maximum signal (97% on average) across groups. Overall, recognition improved with age for all expressions except happiness and fear, for which all age groups, including the youngest, remained within the adult range. Uniquely, our findings characterize the recognition trajectories of the six basic emotions into three distinct groupings: expressions that show a steep improvement with age (disgust, neutral, and anger); expressions that show a more gradual improvement with age (sadness, surprise); and those that remain stable from early childhood (happiness and fear), indicating that the coding for these expressions is already mature by 5 years of age. Altogether, our data provide for the first time a fine-grained mapping of the development of facial expression recognition. This approach significantly increases our understanding of the decoding of emotions across development and offers a novel tool to measure impairments for specific facial expressions in developmental clinical populations.
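The threshold-seeking logic can be sketched as a minimal Bayesian adaptive procedure in the spirit of QUEST: maintain a posterior over candidate thresholds, test at the posterior mean, and update after each simulated response. The psychometric function, its slope, and the simulated observer are toy assumptions, not the study's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_correct(intensity, threshold, slope=10.0, guess=1 / 7, lapse=0.02):
    """Toy logistic psychometric function for a 7-alternative task
    (6 expressions + neutral); intensity = proportion of face signal."""
    p = 1.0 / (1.0 + np.exp(-slope * (intensity - threshold)))
    return guess + (1.0 - guess - lapse) * p

def quest_like(true_threshold, n_trials=60):
    """Bayesian adaptive estimation: keep a posterior over candidate
    thresholds, test at the posterior mean, update by Bayes' rule."""
    grid = np.linspace(0.0, 1.0, 201)          # candidate thresholds
    posterior = np.full(grid.size, 1.0 / grid.size)
    for _ in range(n_trials):
        x = float(np.sum(grid * posterior))     # next signal level to show
        correct = rng.random() < p_correct(x, true_threshold)
        likelihood = p_correct(x, grid) if correct else 1.0 - p_correct(x, grid)
        posterior *= likelihood
        posterior /= posterior.sum()
    return float(np.sum(grid * posterior))

# Simulated observers: an 'easy' expression versus a 'hard' one
print(f"happy-like: {quest_like(0.35):.2f}, fear-like: {quest_like(0.97):.2f}")
```

Because each trial is placed where the posterior is centered, the procedure concentrates testing near the observer's threshold and converges in far fewer trials than a fixed-step staircase.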
Human beings are remarkably skilled at recognizing faces, with the marked exception of other-race faces: the so-called "other-race effect." As reported nearly a century ago [Feingold CA (1914) Journal of Criminal Law and Police Science 5:39-51], this face-recognition impairment is accompanied by the popular belief that other-race faces all look alike. However, the neural mechanisms underlying this high-level "perceptual illusion" are still unknown. To address this question, we recorded high-resolution electrophysiological scalp signals from East Asian (EA) and Western Caucasian (WC) observers as they viewed pairs of EA or WC faces. The first, adaptor face was followed by a target face of either the same or different identity. We quantified repetition suppression (RS), a reduction in neural activity in stimulus-sensitive regions following stimulus repetition. Conventional electrophysiological analyses on target faces failed to reveal any RS effect. However, to fully account for the paired nature of RS events, we subtracted the adaptor-evoked signal from the target-evoked signal on each single trial and performed unbiased, data-driven spatiotemporal analyses. This approach revealed stronger RS to same-race faces of the same identity in both groups of observers on the face-sensitive N170 component. Such neurophysiological modulation in RS suggests efficient identity coding for same-race faces. Strikingly, other-race (OR) faces elicited identical RS regardless of identity, all looking alike to the neural population underlying the N170.
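The paired single-trial subtraction behind this RS analysis can be sketched on toy data: simulate adaptor and target epochs, subtract the adaptor-evoked signal from the target-evoked signal trial by trial, then compare same- and different-identity conditions in an N170-like window. Waveform shapes, amplitudes, and noise levels below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

times = np.arange(0.0, 0.4, 0.002)   # 0-400 ms epoch, 2 ms resolution

def simulate_erp(n_trials, amp, latency=0.17, width=0.03, noise=0.5):
    """Toy N170-like negative deflection plus independent trial noise."""
    component = -amp * np.exp(-0.5 * ((times - latency) / width) ** 2)
    return component + noise * rng.standard_normal((n_trials, times.size))

n = 100
adaptor = simulate_erp(n, amp=4.0)
target_same = simulate_erp(n, amp=3.0)   # repetition suppression: weaker N170
target_diff = simulate_erp(n, amp=4.0)   # different identity: no suppression

# Single-trial RS index: target minus adaptor, paired within each trial
rs_same = target_same - adaptor
rs_diff = target_diff - adaptor

win = (times >= 0.15) & (times <= 0.19)  # N170 analysis window
print(f"RS same-identity: {rs_same[:, win].mean():+.2f}, "
      f"different-identity: {rs_diff[:, win].mean():+.2f}")
```

The pairing matters: subtracting within each trial cancels trial-specific variance shared by adaptor and target, which a conventional average over target epochs alone would not.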
Our data show that sensitivity to race begins early at the perceptual level, providing, after nearly 100 y of investigations, a neurophysiological correlate of the "all look alike" perceptual experience.
adaptation | face processing | EEG | visual cognition
Almost 100 y ago, Feingold (1) reported that human beings living in different geographical locations perceive individuals belonging to "other races" (OR) as all looking alike: "Other things being equal, individuals of a given race are distinguishable from each other in proportion to our familiarity, to our contact with the race as a whole. Thus, to the uninitiated American all Asiatics look alike, while to the Asiatics, all White men look alike." This commonly experienced all-look-alike "perceptual illusion" for OR faces is at the root of one of the most robust empirical findings in face recognition: the other-race effect (ORE). The ORE refers to the marked behavioral impairment displayed by humans in recognizing OR compared to same-race (SR) unfamiliar faces (i.e., lower accuracy coupled with higher false identifications for OR faces). The scientific literature has provided clear evidence that the ORE and the popular belief that OR faces all look alike are not accounted for by a paucity of anthropometric variations in OR faces, but by a genuine lack of expertise. Although this theoretical explanation has been supported by numerous behavioral (for a review, see ref. 2), computational (e.g., refs. 3-5), and neuroimaging (6-15) studies on the ORE, the neurophysio...
Human beings are natural experts at processing faces, with some notable exceptions. Same-race faces are better recognized than other-race faces: the so-called other-race effect (ORE). Inverting faces impairs recognition more than inverting any other visual object: the so-called face inversion effect (FIE). Interestingly, the FIE is stronger for same- compared to other-race faces. At the electrophysiological level, inverted faces elicit a consistently delayed and often larger N170 compared to upright faces. However, whether the N170 component is sensitive to race is still a matter of ongoing debate. Here we investigated the N170's sensitivity to race in the framework of the FIE. We recorded EEG from Western Caucasian and East Asian observers while they were presented with Western Caucasian, East Asian, and African American faces in upright and inverted orientations. To control for potential confounds in the EEG signal that might be evoked by the intrinsic and salient differences in the low-level properties of faces from different races, we normalized their amplitude spectra, luminance, and contrast. No differences in the N170 were observed for upright faces. Critically, inverted same-race faces led to greater recognition impairment and elicited larger N170 amplitudes compared to inverted other-race faces. Our results indicate a finer-grained neural tuning for same-race faces at early stages of processing in both groups of observers.
Functional magnetic resonance imaging (fMRI) has become an indispensable tool for investigating the human brain. However, the inherently poor signal-to-noise ratio (SNR) of the fMRI measurement represents a major barrier to expanding its spatiotemporal scale as well as its utility and ultimate impact. Here we introduce a denoising technique that selectively suppresses the thermal noise contribution to the fMRI experiment. Using 7-Tesla, high-resolution human brain data, we demonstrate improvements in key metrics of functional mapping (temporal SNR, the detection and reproducibility of stimulus-induced signal changes, and accuracy of functional maps) while leaving the amplitude of the stimulus-induced signal changes, spatial precision, and functional point-spread function unaltered. We demonstrate that the method enables the acquisition of ultrahigh-resolution (0.5 mm isotropic) functional maps but is also equally beneficial for a large variety of fMRI applications, including supra-millimeter resolution 3- and 7-Tesla data obtained over different cortical regions with different stimulation/task paradigms and acquisition strategies.
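The core idea of selectively suppressing thermal noise can be sketched as low-rank denoising: discard principal components whose singular values are consistent with i.i.d. Gaussian noise of known level. This is a simplified, global stand-in for the patch-wise, noise-calibrated method the abstract describes; the data and noise level below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def lowrank_denoise(data, noise_sigma):
    """Suppress thermal (i.i.d. Gaussian) noise by zeroing principal
    components whose singular values fall below a noise-derived threshold."""
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    m, n = data.shape
    # Approximate edge of the noise singular-value distribution
    thresh = noise_sigma * (np.sqrt(m) + np.sqrt(n))
    s_clean = np.where(s > thresh, s, 0.0)
    return (u * s_clean) @ vt

# Toy "voxels x timepoints" matrix: a few true components + thermal noise
voxels, timepoints, rank = 500, 120, 3
truth = rng.standard_normal((voxels, rank)) @ rng.standard_normal((rank, timepoints))
noisy = truth + 0.5 * rng.standard_normal((voxels, timepoints))
denoised = lowrank_denoise(noisy, noise_sigma=0.5)

err_noisy = np.linalg.norm(noisy - truth)
err_denoised = np.linalg.norm(denoised - truth)
print(err_denoised < err_noisy)  # denoising reduces error vs. ground truth
```

Because thermal noise spreads its energy across all components while the stimulus-driven signal concentrates in a few, truncating below the noise edge removes mostly noise, which is why the stimulus-induced signal amplitude can be left largely unaltered.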
Despite a wealth of information provided by neuroimaging research, the neural basis of familiar face recognition in humans remains largely unknown. Here, we isolated the discriminative neural responses to unfamiliar and familiar faces by slowly increasing visual information (i.e., high spatial frequencies) to progressively reveal faces of unfamiliar or personally familiar individuals. Activation in ventral occipitotemporal face-preferential regions increased with visual information, independently of long-term face familiarity. In contrast, medial temporal lobe structures (perirhinal cortex, amygdala, hippocampus) and anterior inferior temporal cortex responded abruptly when sufficient information for familiar face recognition was accumulated. These observations suggest that following detailed analysis of individual faces in core posterior areas of the face-processing network, familiar face recognition emerges categorically in medial temporal and anterior regions of the extended cortical face network.
personally familiar face recognition | coarse-to-fine | fusiform face area | amygdala | medial temporal lobe
Face recognition is not rooted in a universal eye movement information-gathering strategy. Western observers favor a local facial feature sampling strategy, whereas Eastern observers prefer sampling face information from a global, central fixation strategy. Yet the precise qualitative (the diagnostic) and quantitative (the amount) information underlying these cultural perceptual biases in face recognition remains undetermined. To this end, we monitored the eye movements of Western and Eastern observers during a face recognition task with a novel gaze-contingent technique: the Expanding Spotlight. We used 2° Gaussian apertures centered on the observers' fixations, expanding dynamically at a rate of 1° every 25 ms at each fixation: the longer the fixation duration, the larger the aperture. Identity-specific face information was displayed only within the Gaussian aperture; outside the aperture, an average face template was displayed to facilitate saccade planning. The Expanding Spotlight thus simultaneously maps out the facial information span at each fixation location. Data obtained with this technique confirmed that Westerners extract more information from the eye region, whereas Easterners extract more information from the nose region. Interestingly, this quantitative difference was paired with a qualitative disparity. Retinal filters based on spatial-frequency decomposition built from the fixation maps revealed that Westerners used local, high-spatial-frequency information sampling covering all the features critical for effective face recognition (the eyes and the mouth). In contrast, Easterners achieved a similar result by using global, low-spatial-frequency information from those facial features. Our data show that the face system flexibly engages local or global eye movement strategies across cultures, relying on distinct facial information spans and culturally tuned spatially filtered information.
Overall, our findings challenge the view of a unique putative process for face recognition.
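The gaze-contingent blending behind an Expanding Spotlight display can be sketched as follows: a Gaussian aperture whose width grows with fixation duration gates the identity-specific pixels, while the average-face template fills the remainder. The pixel scale, the mapping of the 2° aperture onto a Gaussian sigma, and the images are toy assumptions, not the study's display parameters.

```python
import numpy as np

def expanding_spotlight(identity, average, fix_xy, fix_ms,
                        deg_per_px=0.05, base_deg=2.0, rate_deg_per_ms=1 / 25):
    """Blend an identity image with an average-face template through a
    Gaussian aperture that grows with fixation duration (toy parameters)."""
    h, w = identity.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    sigma_deg = base_deg + rate_deg_per_ms * fix_ms   # 1 deg per 25 ms
    sigma_px = sigma_deg / deg_per_px
    d2 = (x - fix_xy[0]) ** 2 + (y - fix_xy[1]) ** 2
    mask = np.exp(-d2 / (2.0 * sigma_px ** 2))        # 1 at fixation, ->0 outside
    return mask * identity + (1.0 - mask) * average

# Toy images: identity-specific info revealed by short vs long fixations
identity = np.ones((100, 100))
average = np.zeros((100, 100))
short = expanding_spotlight(identity, average, (50, 50), fix_ms=25)
long_ = expanding_spotlight(identity, average, (50, 50), fix_ms=250)
print(short.mean() < long_.mean())  # longer fixation reveals more of the face
```

In the actual paradigm this blend would be recomputed in real time from the eye tracker's current fixation position and duration, frame by frame.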