Despite ample research, the structure and functional characteristics of the neural systems involved in human face processing are still a matter of active debate. Here we dissociated a neural mechanism manifested by the face-sensitive N170 event-related potential from a mechanism manifested by induced electroencephalographic oscillations in the gamma band, which have previously been associated with the integration of individually coded features and the activation of corresponding neural representations. The amplitude of the N170 was larger in the absence of the face contour but was not affected by the configuration of inner components (ICs). Its latency was delayed by scrambling the configuration of the components as well as by the absence of the face contour. Unlike the N170, the amplitude of the induced gamma activity was sensitive to the configuration of ICs but insensitive to their presence within or outside a face contour. This pattern suggests two mechanisms for early face processing, each utilizing different visual cues. The N170 seems to be associated primarily with the detection and categorization of faces, whereas the gamma oscillations may be involved in the activation of their mental representation.
While brain imaging studies have emphasized the category selectivity of face-related areas, the mechanisms underlying our remarkable ability to discriminate between different faces are less well understood. Here, we recorded intracranial local field potentials from face-related areas in patients presented with images of faces and objects. Highly significant exemplar tuning within the category of faces was observed in high-gamma (80-150 Hz) responses. The robustness of this effect was supported by single-trial decoding of face exemplars using a minimal (n = 5) training set. Importantly, exemplar tuning reflected the psychophysical distance between faces but not their low-level features. Our results reveal a neuronal substrate for the establishment of perceptual distance among faces in the human brain. They further imply that face neurons are anatomically grouped according to well-defined functional principles, such as perceptual similarity.
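The single-trial decoding mentioned above can be illustrated with a minimal sketch. Everything below is hypothetical: the simulated high-gamma power patterns, the number of electrodes and exemplars, and the nearest-centroid classifier are illustrative stand-ins, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical high-gamma (80-150 Hz) power features:
# 4 face exemplars, 8 electrodes, 5 training trials per exemplar.
n_exemplars, n_train, n_test, n_elec = 4, 5, 10, 8
centers = rng.normal(0, 1, (n_exemplars, n_elec))  # exemplar-specific patterns

def simulate(n_per):
    """Simulate trials: each exemplar's pattern plus trial-to-trial noise."""
    X = np.vstack([c + 0.3 * rng.normal(size=(n_per, n_elec)) for c in centers])
    y = np.repeat(np.arange(n_exemplars), n_per)
    return X, y

X_train, y_train = simulate(n_train)
X_test, y_test = simulate(n_test)

# Nearest-centroid decoder: average the 5 training trials per exemplar,
# then classify each single test trial by its closest centroid.
centroids = np.vstack([X_train[y_train == k].mean(axis=0)
                       for k in range(n_exemplars)])
dists = ((X_test[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
pred = np.argmin(dists, axis=1)
accuracy = (pred == y_test).mean()
print(f"single-trial accuracy: {accuracy:.2f} (chance = {1/n_exemplars:.2f})")
```

A centroid classifier is one of the few decoders that remains stable with only five training trials per class, which is presumably why so small a training set can still yield above-chance single-trial decoding.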
Prior semantic knowledge facilitates episodic recognition memory for faces. To examine the neural manifestation of the interplay between semantic and episodic memory, we investigated neuroelectric dynamics during the creation (study) and retrieval (test) of episodic memories for famous and nonfamous faces. Episodic memory effects were evident in several EEG frequency bands: theta (4-8 Hz), alpha (9-13 Hz), and gamma (40-100 Hz). Activity in these bands was differentially modulated by preexisting semantic knowledge and by episodic memory, implicating their different functional roles in memory. More specifically, theta activity and alpha suppression were larger for old than for new faces at test regardless of fame, but were both larger for famous faces during study. This pattern of selective semantic effects suggests that the theta and alpha responses, which are primarily associated with episodic memory, reflect utilization of semantic information only when it is beneficial for task performance. In contrast, gamma activity decreased between the first (study) and second (test) presentation of a face, but overall was larger for famous than nonfamous faces. Hence, the gamma rhythm seems to be primarily related to activation of preexisting neural representations that may contribute to the formation of new episodic traces. Although the latter process is affected by the episodic status of a stimulus, gamma activity might not be a direct index of episodic memory. Taken together, these data provide new insights into the complex interaction between semantic and episodic memory for faces and the neural dynamics associated with mnemonic processes.
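Band-limited activity of the kind analyzed above is commonly quantified as power in a frequency band. The sketch below, with an assumed sampling rate, filter settings, and a synthetic one-channel signal (none taken from the study), shows one standard approach: bandpass filtering followed by a Hilbert envelope.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)

# Synthetic "EEG": a 6 Hz theta component, a weaker 10 Hz alpha
# component, and broadband noise.
rng = np.random.default_rng(1)
eeg = (np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
       + 0.2 * rng.standard_normal(t.size))

def band_power(x, lo, hi, fs):
    """Mean power in [lo, hi] Hz: bandpass filter, then Hilbert envelope."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, x)))
    return np.mean(env ** 2)

bands = {"theta": (4, 8), "alpha": (9, 13), "gamma": (40, 100)}
for name, (lo, hi) in bands.items():
    print(f"{name}: {band_power(eeg, lo, hi, fs):.3f}")
```

In this synthetic example theta power dominates by construction; in a real analysis the same measure would be computed per trial and condition before statistical comparison.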
Temporal structure in the environment often has predictive value for anticipating the occurrence of forthcoming events. In this study we investigated the influence of two types of predictive temporal information on the perception of near-threshold auditory stimuli: 1) intrinsic temporal rhythmicity within an auditory stimulus stream and 2) temporally predictive visual cues. We hypothesized that combining predictive temporal information within- and across-modality should decrease the threshold at which sounds are detected, beyond the advantage provided by each information source alone. Two experiments were conducted in which participants had to detect tones in noise. Tones were presented in either rhythmic or random sequences and were preceded by a temporally predictive visual signal in half of the trials. We show that detection intensities are lower for rhythmic (vs. random) and audiovisual (vs. auditory-only) presentation, independent of response bias, and that this effect is even greater for rhythmic audiovisual presentation. These results suggest that both types of temporal information are used to optimally process sounds that occur at expected points in time (resulting in enhanced detection), and that multiple temporal cues are combined to improve temporal estimates. Our findings underscore the flexibility and proactivity of the perceptual system, which uses within- and across-modality temporal cues to anticipate upcoming events and process them optimally.
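Showing that a detection benefit is independent of response bias typically relies on signal detection theory, which separates sensitivity (d') from the decision criterion. A minimal sketch, with entirely hypothetical trial counts (not the study's data):

```python
from statistics import NormalDist

def sdt(hits, misses, fas, crs):
    """Return (d', criterion c) from hit/miss and false-alarm/correct-rejection
    counts. d' indexes sensitivity; c indexes response bias."""
    z = NormalDist().inv_cdf
    h = hits / (hits + misses)   # hit rate
    f = fas / (fas + crs)        # false-alarm rate
    return z(h) - z(f), -0.5 * (z(h) + z(f))

# Hypothetical counts: a higher hit rate for audiovisual presentation
# with an unchanged false-alarm rate raises d' but not the criterion.
d_av, c_av = sdt(hits=45, misses=15, fas=10, crs=50)   # audiovisual
d_a, c_a = sdt(hits=36, misses=24, fas=10, crs=50)     # auditory-only
print(f"audiovisual d'={d_av:.2f}, auditory-only d'={d_a:.2f}")
```

A sensitivity difference of this kind, with bias held constant, is what licenses the claim that rhythmic and audiovisual cues genuinely improve detection rather than merely shifting participants' willingness to respond "yes".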
Many environmental stimuli contain temporal regularities, a feature that can help predict forthcoming input. Phase locking (entrainment) of ongoing low-frequency neuronal oscillations to rhythmic stimuli is proposed as a potential mechanism for enhancing neuronal responses and perceptual sensitivity, by aligning high-excitability phases to events within a stimulus stream. Previous experiments show that rhythmic structure has a behavioral benefit even when the rhythm itself is below perceptual detection thresholds (ten Oever et al., 2014). It is not known whether this "inaudible" rhythmic sound stream also induces entrainment. Here we tested this hypothesis using magnetoencephalography and electrocorticography in humans to record changes in neuronal activity as subthreshold rhythmic stimuli gradually became audible. We found that significant phase locking to the rhythmic sounds preceded participants' detection of them. Moreover, no significant auditory-evoked responses accompanied this prethreshold entrainment. These auditory-evoked responses, distinguished by robust, broad-band increases in intertrial coherence, only appeared after sounds were reported as audible. Taken together with the reduced perceptual thresholds observed for rhythmic sequences, these findings support the proposition that entrainment of low-frequency oscillations serves a mechanistic role in enhancing perceptual sensitivity for temporally predictive sounds. This framework has broad implications for understanding the neural mechanisms involved in generating temporal predictions and their relevance for perception, attention, and awareness.

The environment is full of rhythmically structured signals that the nervous system can exploit for information processing. Thus, it is important to understand how the brain processes such temporally structured, regular features of external stimuli. Here we report the alignment of slowly fluctuating oscillatory brain activity to external rhythmic structure before its behavioral detection. These results indicate that phase alignment is a general mechanism by which the brain processes rhythmic structure, and that it can occur without perceptual detection of this temporal structure.
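The phase-locking measure referred to above, intertrial coherence (ITC), can be sketched as the magnitude of the mean unit phase vector across trials at the stimulus frequency. The sampling rate, rhythm frequency, and simulated trials below are illustrative assumptions, not the recorded MEG/ECoG data.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, f_stim = 250.0, 3.0          # sampling rate and stimulus rhythm in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
n_trials = 60

def itc(trials, f, t):
    """Intertrial coherence at frequency f: magnitude of the mean
    unit-length phase vector across trials (1 = perfect phase locking,
    ~0 = random phases)."""
    phases = []
    for x in trials:
        coef = np.exp(-2j * np.pi * f * t) @ x   # Fourier coefficient at f
        phases.append(coef / np.abs(coef))       # keep phase, discard amplitude
    return np.abs(np.mean(phases))

# Entrained trials: an oscillation phase-locked to the rhythm, plus noise.
locked = [np.cos(2 * np.pi * f_stim * t) + rng.standard_normal(t.size)
          for _ in range(n_trials)]
# Non-entrained trials: the same oscillation with a random phase per trial.
random_phase = [np.cos(2 * np.pi * f_stim * t + rng.uniform(0, 2 * np.pi))
                + rng.standard_normal(t.size) for _ in range(n_trials)]

print(f"ITC locked: {itc(locked, f_stim, t):.2f}, "
      f"random: {itc(random_phase, f_stim, t):.2f}")
```

Because ITC discards single-trial amplitude, phase alignment can in principle be high while evoked amplitude is negligible, which is the pattern the study reports for sounds below the detection threshold.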
EEG studies have suggested that the N170 ERP and gamma-band responses to faces reflect early and later stages, respectively, of a multiple-level face-perception mechanism. However, these conclusions should be considered cautiously because EEG-recorded gamma may be contaminated by non-cephalic activity such as microsaccades. Moreover, EEG studies of gamma cannot easily reveal its intracranial sources. Here we recorded MEG rather than EEG, assessed the sources of the M170 and gamma oscillations using a beamformer, and explored the sensitivity of these neural manifestations to global, featural and configural information in faces. The M170 was larger in response to faces and face components than in response to watches. Scrambling the configuration of the inner components of the face, even when they were presented without the face contour, reduced and delayed the M170. The amplitude of MEG gamma oscillations (30–70 Hz) was higher than baseline during an epoch between 230–570 ms from stimulus onset and was particularly sensitive to the configuration of the stimuli, regardless of their category. However, in the lower part of this frequency range (30–40 Hz), only physiognomic stimuli elevated the MEG above baseline. Both the M170 and gamma were generated in a posterior-ventral network including the fusiform, inferior-occipital and lingual gyri, all in the right hemisphere. The generation of gamma involved additional sources in the visual system, bilaterally. We suggest that the evoked M170 manifests a face-perception mechanism based on the global characteristics of faces, whereas the induced gamma oscillations are associated with the integration of visual input into a pre-existing coherent perceptual representation.