Recent studies have shown that information from peripherally presented images is present in the human foveal retinotopic cortex, presumably because of feedback signals. We investigated this potential feedback signal by presenting noise at the fovea at different object-noise stimulus onset asynchronies (SOAs) while subjects performed a discrimination task on peripheral objects. Results revealed a selective impairment of performance when foveal noise was presented at 250-ms SOA, but only for tasks that required comparing objects' spatial details, suggesting a task- and stimulus-dependent foveal processing mechanism. Critically, the temporal window of foveal processing was shifted when mental rotation was required for the peripheral objects, indicating that foveal retinotopic processing is not automatically engaged at a fixed time following peripheral stimulation; rather, it occurs at the stage when detailed information is required. Moreover, fMRI measurements using multivoxel pattern analysis showed that both image- and object category-relevant information about peripheral objects was represented in the foveal cortex. Taken together, our results support the hypothesis of a temporally flexible feedback signal to the foveal retinotopic cortex when discriminating objects in the visual periphery.

Visual information processing involves feedforward and feedback interactions between different visual areas. Recent studies have identified a potential feedback signal specific to the foveal cortex (1, 2). Neuroimaging results have shown that object category information from peripherally presented images can be decoded from the foveal retinotopic cortex when subjects perform an object discrimination task (1). Further, subjects' behavioral performance is impaired when transcranial magnetic stimulation (TMS) is applied to the posterior foveal cortex at 350-400 ms after peripheral stimulus onset (2), consistent with the hypothesis of a feedback signal that directly affects behavior.
Performance on a peripheral target can also be modulated psychophysically by presenting information at the fovea (3, 4). These results support the idea that the foveal retinotopic cortex is engaged for object discrimination, even for peripherally presented objects. The current study addresses three key questions regarding the role of stimulus properties and task in modulating foveal processing, and the temporal properties of this event: Is foveal processing only engaged for high-resolution spatial tasks? Does it happen automatically, or only at the time a high-level task requires it? Does the foveal cortex contain information about retinotopic object properties, such as image orientation, in addition to object category-relevant information?

Presumably, foveal visual noise would disrupt subjects' performance in discriminating peripheral objects only when the noise and the potential feedback signal engage the foveal cortex at the same time. As predicted, we found a selective impairment of performance when foveal noise was presented ∼250 ms following th...
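The multivoxel pattern analysis mentioned above is typically run as cross-validated classification of voxel response patterns within a region of interest. A common, simple variant is a leave-one-run-out correlation classifier (Haxby-style MVPA). The sketch below is illustrative only, using synthetic data rather than the study's fMRI recordings; the array shapes, noise levels, and function name `correlation_decode` are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "foveal ROI" data: 8 scanner runs x 2 object categories x 50 voxels.
# Each run's pattern is a noisy copy of a fixed per-category template.
n_runs, n_cats, n_vox = 8, 2, 50
templates = rng.normal(0, 1, (n_cats, n_vox))
X = templates[None] + rng.normal(0, 1.0, (n_runs, n_cats, n_vox))

def correlation_decode(X):
    """Leave-one-run-out correlation classifier.

    For each held-out run, average the remaining runs into per-category
    templates, then assign each test pattern to the category whose template
    it correlates with most strongly. Returns overall decoding accuracy.
    """
    n_runs, n_cats, _ = X.shape
    correct = 0
    for test_run in range(n_runs):
        train = np.delete(X, test_run, axis=0).mean(axis=0)  # category templates
        for cat in range(n_cats):
            r = [np.corrcoef(X[test_run, cat], train[c])[0, 1]
                 for c in range(n_cats)]
            correct += int(np.argmax(r) == cat)
    return correct / (n_runs * n_cats)

accuracy = correlation_decode(X)
print(accuracy)  # above the 0.5 chance level for these separable synthetic patterns
```

Above-chance accuracy in such an analysis is what licenses the claim that the ROI "represents" the decoded stimulus property; the studies summarized here apply this logic to the foveal retinotopic cortex while the stimuli themselves are peripheral.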
Although face processing has been studied extensively, the dynamics of how face-selective cortical areas are engaged remain unclear. Here, we uncovered the timing of activation in core face-selective regions using functional Magnetic Resonance Imaging and Magnetoencephalography in humans. Processing of normal faces started in posterior occipital areas and then proceeded to anterior regions. This bottom-up processing sequence was observed even when internal facial features were misarranged. However, processing of two-tone Mooney faces lacking explicit prototypical facial features engaged top-down projection from the right posterior fusiform face area to the right occipital face area. Further, face-specific responses elicited by contextual cues alone emerged simultaneously in the right ventral face-selective regions, suggesting parallel contextual facilitation. Together, our findings chronicle the precise timing of bottom-up, top-down, and context-facilitated processing sequences in the occipital-temporal face network, highlighting the importance of top-down operations, especially when faced with incomplete or ambiguous input.
Humans can accurately recognize familiar faces in only a few hundred milliseconds, but the underlying neural mechanism remains unclear. Here, we recorded intracranial electrophysiological signals from ventral temporal cortex (VTC), superior/middle temporal cortex (STC/MTC), medial parietal cortex (MPC), and amygdala/hippocampus (AMG/HPC) in 20 epilepsy patients while they viewed faces of famous people and strangers, as well as common objects. In posterior VTC and MPC, familiarity-sensitive responses emerged significantly later than initial face-selective responses, suggesting that familiarity enhances face representations after they are first extracted. Moreover, viewing famous faces increased the coupling between cortical areas and AMG/HPC in multiple frequency bands. These findings advance our understanding of the neural basis of familiar face perception by identifying top-down modulation of local face-selective responses and interactions between cortical face areas and AMG/HPC.