Here we used magnetoencephalography (MEG) to investigate stages of processing in face perception in humans. We found a face-selective MEG response occurring only 100 ms after stimulus onset (the 'M100'), 70 ms earlier than previously reported. Further, the amplitude of this M100 response was correlated with successful categorization of stimuli as faces, but not with successful recognition of individual faces, whereas the previously-described face-selective 'M170' response was correlated with both processes. These data suggest that face processing proceeds through two stages: an initial stage of face categorization, and a later stage at which the identity of the individual face is extracted.
We propose that self-control failures, and variation across individuals in self-control abilities, are partly due to differences in the speed with which the decision-making circuitry processes basic attributes like taste, versus more abstract attributes such as health. We test these hypotheses by combining a dietary choice task with a novel form of mouse tracking that allows us to pinpoint when different attributes are being integrated into the choice process with millisecond temporal resolution. We find that, on average, taste attributes are processed about 195 ms earlier than health attributes during the choice process. We also find that 13–39% of observed individual differences in self-control ability can be explained by differences in the relative speed with which taste and health attributes are processed.
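The timing mechanism described in this abstract can be illustrated with a toy accumulator model in which taste evidence begins driving the decision earlier than health evidence. All function names, parameter values, and the noiseless-accumulator simplification below are illustrative assumptions, not details taken from the study:

```python
# Toy time-varying evidence accumulator (hypothetical sketch): taste influences
# the drift from stimulus onset, while health enters only after a delay.

def simulate_trial(taste, health, w_taste=1.0, w_health=1.0,
                   health_onset_ms=195, dt_ms=1, bound=1.0, max_ms=3000):
    """Noiseless accumulator. Returns (choice, rt_ms):
    choice = 1 (accept) if the upper bound is hit, 0 (reject) if the lower."""
    evidence = 0.0
    for t in range(0, max_ms, dt_ms):
        drift = w_taste * taste
        if t >= health_onset_ms:          # health evidence enters later
            drift += w_health * health
        evidence += drift * (dt_ms / 1000.0)
        if evidence >= bound:
            return 1, t
        if evidence <= -bound:
            return 0, t
    return (1 if evidence > 0 else 0), max_ms

# A tasty but unhealthy item: when health information enters late, the early
# taste evidence lets "accept" win sooner than if health were weighed from t=0.
choice, rt = simulate_trial(taste=2.0, health=-0.8)
```

In this sketch, shrinking `health_onset_ms` toward zero plays the role of improved self-control: the negative health evidence counteracts taste earlier, slowing or reversing acceptance of tasty-but-unhealthy items.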
fMRI studies have reported three regions in human ventral visual cortex that respond selectively to faces: the occipital face area (OFA), the fusiform face area (FFA), and a face-selective region in the superior temporal sulcus (fSTS). Here, we asked whether these areas respond to two first-order aspects of the face argued to be important for face perception: face parts (eyes, nose, and mouth) and the T-shaped spatial configuration of these parts. Specifically, we measured the magnitude of response in these areas to stimuli that (i) either contained real face parts or did not, and (ii) either had veridical face configurations or did not. The OFA and the fSTS were sensitive only to the presence of real face parts, not to the correct configuration of those parts, whereas the FFA was sensitive to both face parts and face configuration. Further, only in the FFA was the response to configuration and part information correlated across voxels, suggesting that the FFA contains a unified representation that includes both kinds of information. In combination with prior results from fMRI, TMS, MEG, and patient studies, our data illuminate the functional division of labor in the OFA, FFA, and fSTS.
The parahippocampal place area (PPA) has been demonstrated to respond more strongly in fMRI to scenes depicting places than to other kinds of visual stimuli. Here, we test several hypotheses about the function of the PPA. We find that PPA activity (1) is not affected by the subjects' familiarity with the place depicted, (2) does not increase when subjects experience a sense of motion through the scene, and (3) is greater when viewing novel versus repeated scenes but not novel versus repeated faces. Thus, we find no evidence that the PPA is involved in matching perceptual information to stored representations in memory, in planning routes, or in monitoring locomotion through the local or distal environment but some evidence that it is involved in encoding new perceptual information about the appearance and layout of scenes.
To test whether the human fusiform face area (FFA) responds not only to faces but to anything human or animate, we used fMRI to measure the response of the FFA to six new stimulus categories. The strongest responses were to stimuli containing faces: human faces (2.0% signal increase from fixation baseline) and human heads (1.7%), with weaker but still strong responses to whole humans (1.5%) and animal heads (1.3%). Responses to whole animals (1.0%) and human bodies without heads (1.0%) were significantly stronger than responses to inanimate objects (0.7%), but responses to animal bodies without heads (0.8%) were not. These results demonstrate that the FFA is selective for faces, not for animals.
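The measure quoted throughout this abstract, percent signal change from a fixation baseline, is a standard fMRI summary statistic. A minimal sketch of the computation, using hypothetical raw BOLD values (the abstract reports only the resulting percentages):

```python
# Percent BOLD signal change of a condition relative to a baseline condition.
# The raw signal values in the usage example are hypothetical illustrations.

def percent_signal_change(condition_mean, baseline_mean):
    """Mean signal in a condition, expressed as % change from baseline."""
    return 100.0 * (condition_mean - baseline_mean) / baseline_mean

# E.g., a raw condition mean of 102.0 against a fixation baseline of 100.0
# corresponds to the 2.0% increase reported for human faces.
faces_psc = percent_signal_change(102.0, 100.0)
```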
Optimal decision-making often requires exercising self-control. A growing fMRI literature has implicated the dorsolateral prefrontal cortex (dlPFC) in successful self-control, but due to the limitations inherent in BOLD measures of brain activity, the neurocomputational role of this region has not been resolved. Here we exploit the high temporal resolution and whole-brain coverage of event-related potentials (ERPs) to test the hypothesis that dlPFC affects dietary self-control through two different mechanisms: attentional filtering and value modulation. Whereas attentional filtering of sensory input should occur early in the decision process, value modulation should occur later on, after the computation of stimulus values begins. Hungry human subjects were asked to make food choices while we measured neural activity using ERP in a natural condition, in which they responded freely and did not exhibit a tendency to regulate their diet, and in a self-control condition, in which they were given a financial incentive to lose weight. We then measured various neural markers associated with the attentional filtering and value modulation mechanisms across the decision period to test for changes in neural activity during the exercise of self-control. Consistent with the hypothesis, we found evidence for top-down attentional filtering early on in the decision period (150–200 ms poststimulus onset) as well as evidence for value modulation later in the process (450–650 ms poststimulus onset). We also found evidence that dlPFC plays a role in the deployment of both mechanisms.
Although face perception is often characterized as depending on holistic, rather than part-based, processing, there is behavioral evidence for independent representations of face parts. Recent work has linked "face-selective" regions defined with functional magnetic resonance imaging (fMRI) to holistic processing, but the response of these areas to face parts remains unclear. Here we examine part-based versus holistic processing in "face-selective" visual areas using face stimuli manipulated in binocular disparity to appear either behind or in front of a set of stripes [Nakayama, K., Shimojo, S., & Silverman, G. H. Stereoscopic depth: Its relation to image segmentation, grouping, and the recognition of occluded objects. Perception, 18, 55-68, 1989]. Whereas the first case is "filled in" by the visual system and perceived holistically, we demonstrate behaviorally that the latter cannot be completed amodally and thus is perceived as parts. Using these stimuli in fMRI, we found significant responses to both depth manipulations in inferior occipital gyrus and middle fusiform gyrus (MFG) "face-selective" regions, suggesting that neural populations in these areas encode both parts and wholes. In comparison, applying these depth manipulations to control stimuli (alphanumeric characters) elicited much smaller signal changes within face-selective regions, indicating that the part-based representation for faces is separate from that for objects. The combined adaptation data also showed an interaction of depth and familiarity within the right MFG, with greater adaptation in the back (holistic) condition relative to parts for familiar but not unfamiliar faces. Together, these data indicate that face-selective regions of occipitotemporal cortex engage in both part-based and holistic processing. The relative recruitment of such representations may be additionally influenced by external factors such as familiarity.
Adaptation paradigms are becoming increasingly popular for characterizing visual areas in neuroimaging, but the relation of these results to perception is unclear. Neurophysiological studies have generally reported effects of stimulus repetition starting at 250-300 ms after stimulus onset, well beyond the latencies of components associated with perception (100-200 ms). Here we demonstrate adaptation for earlier evoked components when 2 stimuli (S1 and S2) are presented in close succession. Using magnetoencephalography, we examined the M170, a "face-selective" response at 170 ms after stimulus onset that shows a larger response to faces than to other stimuli. Adaptation of the M170 occurred only when stimuli were presented with relatively short stimulus onset asynchronies (< 800 ms) and was larger for faces preceded by faces than by houses. This face-selective adaptation is not merely low-level habituation to physical stimulus attributes, as photographic, line-drawing, and 2-tone face images produced similar levels of adaptation. Nor does it depend on the amplitude of the S1 response: adaptation remained greater for faces than houses even when the amplitude of the S1 face response was reduced by visual noise. These results indicate that rapid adaptation of early, short-latency responses not only exists but also can be category selective.
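Repetition adaptation of the kind this abstract describes is commonly quantified as the proportional reduction of the second response (S2) relative to the first (S1). A minimal sketch of that index, with hypothetical amplitudes (the abstract does not report raw component values):

```python
# Fractional adaptation of an evoked component across a stimulus pair.
# 0.0 = no adaptation; 1.0 = complete suppression of the S2 response.
# Amplitude values in the usage example are hypothetical.

def adaptation_index(s1_amplitude, s2_amplitude):
    """Proportional reduction of the S2 response relative to S1."""
    return 1.0 - s2_amplitude / s1_amplitude

# E.g., an M170 of 50 fT to S1 that drops to 35 fT when a face is
# repeated at a short SOA yields 30% adaptation.
face_repeat_adaptation = adaptation_index(50.0, 35.0)
```

In the study's terms, category-selective adaptation corresponds to this index being reliably larger for face-preceded faces than for house-preceded faces at SOAs under 800 ms.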