The representation of object identity is fundamental to human vision. Using fMRI and multivoxel pattern analysis, we report here the representation of highly abstract object-identity information in human parietal cortex. Specifically, in superior intraparietal sulcus (IPS), a region previously shown to track visual short-term memory capacity, we found object-identity representations for famous faces varying freely in viewpoint, hairstyle, facial expression, and age, and for well-known cars embedded in different scenes and shown from different viewpoints and sizes. Critically, these parietal identity representations were behaviorally relevant, as they closely tracked the perceived face-identity similarity obtained in a behavioral task. Meanwhile, the task-activated regions in prefrontal and parietal cortices (excluding superior IPS) did not exhibit such abstract object-identity representations. Unlike previous studies, we also failed to observe identity representations in posterior ventral and lateral visual object-processing regions, likely because our stimulus manipulation demanded a greater degree of identity abstraction. Our MRI slice coverage precluded us from examining identity representation in the anterior temporal lobe, a likely site for the computation of identity information in the ventral pathway. Overall, we show that human parietal cortex, part of the dorsal visual processing pathway, is capable of holding abstract and complex visual representations that are behaviorally relevant. These results argue against a "content-poor" view of the role of parietal cortex in attention. Instead, human parietal cortex appears to be "content-rich" and capable of directly participating in goal-driven visual information representation in the brain.
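The behavioral-relevance test described above (neural identity representations tracking perceived similarity) is typically implemented as a representational similarity analysis. The following is a minimal sketch of that logic using simulated data, not data from the study; the pattern values and behavioral ratings here are randomly generated placeholders.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

n_identities, n_voxels = 4, 50
# Simulated multivoxel response patterns, one row per face identity
# (placeholder values; real patterns would come from fMRI data)
patterns = rng.normal(size=(n_identities, n_voxels))

# Neural dissimilarity for each identity pair: 1 - Pearson correlation
pairs = list(combinations(range(n_identities), 2))
neural_rdm = np.array([1 - np.corrcoef(patterns[i], patterns[j])[0, 1]
                       for i, j in pairs])

# Hypothetical behavioral dissimilarity ratings for the same pairs
behavioral_rdm = rng.uniform(size=len(pairs))

# Spearman rank correlation between the two dissimilarity vectors
# (computed by hand here to keep the sketch dependency-free)
def rankdata(x):
    return np.argsort(np.argsort(x)).astype(float)

rho = np.corrcoef(rankdata(neural_rdm), rankdata(behavioral_rdm))[0, 1]
print(f"neural-behavioral RDM correlation: {rho:.2f}")
```

A high rank correlation between the neural and behavioral dissimilarity structures is what licenses the claim that a region's identity code is behaviorally relevant.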
The primate visual system contains two major cortical pathways: a ventral-temporal pathway associated with object processing and recognition, and a dorsal-parietal pathway associated with spatial processing and action guidance. Our understanding of the role of the dorsal pathway, in particular, has greatly evolved within the framework of the two-pathway hypothesis since its original conception. Here, we present a comparative review of the primate dorsal pathway in humans and monkeys based on electrophysiological, neuroimaging, neuropsychological, and neuroanatomical studies. We consider similarities and differences across species in terms of the topographic representation of visual space; specificity for eye, reaching, or grasping movements; multimodal response properties; and the representation of objects and tools. We also review the relative anatomical locations of functionally and topographically defined regions of the posterior parietal cortex. An emerging theme from this comparative analysis is that non-spatial information is represented to a greater degree, and with increased complexity, in the human dorsal visual system. We propose that non-spatial information in the primate parietal cortex contributes to a perception-to-action system aimed at manipulating objects in peripersonal space. In humans, this network has expanded in multiple ways, including the development of a dorsal object vision system mirroring the complexity of the ventral stream, the integration of object information with parietal working memory systems, and the emergence of tool-specific object representations in the anterior intraparietal sulcus and regions of the inferior parietal lobe. We propose that these evolutionary changes have enabled the emergence of human-specific behaviors, such as the sophisticated use of tools.
A host of recent studies have reported robust representations of visual object information in the human parietal cortex, similar to those found in ventral visual cortex. In ventral visual cortex, both monkey neurophysiology and human fMRI studies have shown that the neural representation of a pair of unrelated objects can be approximated by the averaged neural representation of the constituent objects shown in isolation. In this study, we examined whether such a linear relationship between objects exists for object representations in the human parietal cortex. Using fMRI and multivoxel pattern analysis, we examined object representations in human inferior and superior intraparietal sulcus, two parietal regions previously implicated in visual object selection and encoding, respectively. We also examined responses from the lateral occipital region, a ventral object-processing area. We obtained fMRI response patterns to object pairs and to their constituent objects shown in isolation while participants viewed these objects and performed a 1-back repetition detection task. By measuring fMRI response pattern correlations, we found that all three brain regions contained representations for both single objects and object pairs. In the lateral occipital region, the representation of a pair of objects could be reliably approximated by the average representation of its constituent objects shown in isolation, replicating previous findings in ventral visual cortex. Such a simple linear relationship, however, was not observed in either parietal region examined. Nevertheless, when we equated the amount of task information present by examining responses to two pairs of objects, we found that in both parietal regions the representation for the average of two object pairs was indistinguishable from the average of another two object pairs containing the same four component objects but with a different pairing (i.e., the average of AB and CD vs. that of AD and CB).
Thus, when task information was held constant, the same linear relationship may govern how multiple independent objects are represented in the human parietal cortex as it does in ventral visual cortex. These findings show that object and task representations coexist in the human parietal cortex and characterize one significant difference in how visual information may be represented in ventral visual and parietal regions.
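The linear-averaging account tested above can be sketched in a few lines. This is an illustrative simulation under the assumption that a pair's pattern equals the mean of its constituents' patterns plus noise; the patterns and noise level are invented, not study data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 100

# Simulated single-object response patterns (placeholder values)
A, B, C, D = (rng.normal(size=n_voxels) for _ in range(4))

# Under the linear-averaging account, a pair's pattern approximates
# the mean of its constituents' patterns, plus measurement noise.
def noise():
    return rng.normal(scale=0.3, size=n_voxels)

pair_AB = (A + B) / 2 + noise()
pair_CD = (C + D) / 2 + noise()
pair_AD = (A + D) / 2 + noise()
pair_CB = (C + B) / 2 + noise()

# Test 1: pair pattern vs. average of its constituents' patterns
# (the relationship observed in the lateral occipital region)
r_pair = np.corrcoef(pair_AB, (A + B) / 2)[0, 1]

# Test 2: average of two pair patterns vs. average of a re-pairing of
# the same four objects (the comparison used in the parietal regions:
# mean of AB and CD vs. mean of AD and CB)
r_repair = np.corrcoef((pair_AB + pair_CD) / 2,
                       (pair_AD + pair_CB) / 2)[0, 1]
print(f"pair vs. averaged singles: r = {r_pair:.2f}")
print(f"re-paired pair averages:   r = {r_repair:.2f}")
```

Note that both averages in the second test have the same expected value, (A + B + C + D) / 4, which is why re-pairings become indistinguishable once the averaging holds and task information is equated.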
In many everyday activities, we need to attend to and encode multiple target objects among distractor objects. For example, when driving a car on a busy street, we need to simultaneously attend to objects such as traffic signs, pedestrians, and other cars, while ignoring colorful and flashing objects in display windows. To explain how multiple visual objects are selected and encoded in visual short-term memory (VSTM), and in perception in general, the neural object-file theory argues that whereas object selection and individuation are supported by inferior intraparietal sulcus (IPS), the encoding of detailed object features that enables object identification is mediated by superior IPS and higher visual areas such as the lateral occipital complex (LOC). Nevertheless, because task-irrelevant distractor objects were never present in previous studies, it is unclear how distractor objects would impact neural responses related to target object individuation and identification. To address this question, in two fMRI experiments, we asked participants to encode target object shapes among distractor object shapes, with targets and distractors shown in different spatial locations and in different colors. We found that distractor-related neural processing occurred only at low, but not at high, target encoding load and impacted both target individuation in inferior IPS and target identification in superior IPS and LOC. However, such distractor-related neural processing was short-lived, as it was present only during the VSTM encoding period but not the delay period. Moreover, with spatial cuing of target locations in advance, distractor processing was attenuated during target encoding in superior IPS. These results are consistent with the load theory of visual information processing. They also show that while inferior IPS and LOC were automatically engaged in distractor processing under low task load, with the help of precuing, superior IPS was able to encode only the task-relevant visual information.
The human parietal cortex exhibits a preference for contralaterally presented visual stimuli (i.e., laterality) as well as an asymmetry between the two hemispheres, with the left parietal cortex showing greater laterality than the right. Using visual short-term memory and perceptual tasks and varying target location predictability, this study examined whether hemispheric laterality and asymmetry are fixed characteristics of the human parietal cortex or whether they are dynamic and modulated by the deployment of top-down attention to the target-present hemifield. Two parietal regions were examined that have previously been shown to be involved in visual object individuation and identification and are located in the inferior and superior intraparietal sulcus (IPS), respectively. Across three experiments, significant laterality was found in both parietal regions regardless of attentional modulation, with laterality being greater in the inferior than the superior IPS, consistent with their respective roles in object individuation and identification. Although the deployment of top-down attention had no effect on the superior IPS, it significantly increased laterality in the inferior IPS. The deployment of top-down spatial attention can thus amplify the strength of laterality in the inferior IPS. Hemispheric asymmetry, on the other hand, was absent in both brain regions and emerged only in the inferior, but not the superior, IPS with the deployment of top-down attention. Interestingly, the strength of hemispheric asymmetry significantly correlated with the strength of laterality in the inferior IPS. Hemispheric asymmetry thus seems to emerge only when a sufficient amount of laterality is present in a brain region.
Recent studies on the probability cueing effect have shown that a spatial bias emerges toward a location where a target frequently appears. In the present study, we explored whether such a spatial bias can be flexibly shifted when the target-frequent location changes depending on the given context. In four consecutive experiments, participants performed a visual search task within two distinct contexts that predicted which visual quadrant was more likely to contain a target. We found that spatial attention was equally biased toward the two target-frequent quadrants, regardless of context (a context-independent spatial bias), when the context information was not mandatory for accurate visual search. Conversely, when the context became critical for the visual search task, the spatial bias shifted significantly more toward the target-frequent quadrant predicted by the given context (a context-specific spatial bias). These results show that the task relevance of context determines whether probabilistic knowledge can be learned flexibly in a context-specific manner.
The present work examined discrimination accuracy for targets that were presented either alone in the visual field (clean displays) or embedded within a dense array of letter distractors (crowded displays). The strength of visual crowding varied strongly across the four quadrants of the visual field. Furthermore, this spatial bias in crowding was strongly influenced by the observers' prior experience with specific distractor stimuli. Observers who were monolingual readers of English experienced amplified crowding in the upper-left quadrant, whereas observers with primary reading skills in Korean, Chinese, or Japanese tended toward worse target discrimination in the lower visual field. This interaction with language experience was eliminated when non-alphanumeric stimuli were employed as distractors, suggesting that prior reading experience induced a stimulus-specific change in the topography of visual crowding from English letters.