Abstract words constitute nearly half of the human lexicon and are critically associated with abstract thought, yet little is known about how they are represented in the brain. We tested the neural basis of two classical cognitive notions of abstract meaning representation: representation by linguistic contexts and representation by semantic features. We collected fMRI BOLD responses to 360 abstract words and built theoretical representational models from state-of-the-art corpus-based natural language processing models and from behavioral ratings of semantic features. Representational similarity analyses revealed that both linguistic contextual similarity and semantic feature similarity affected the representation of abstract concepts, but at distinct neural levels. Corpus-based similarity was coded in the high-level linguistic processing system, whereas semantic feature information was reflected in distributed brain regions and in the principal-component space derived from whole-brain activation patterns. These findings highlight the multidimensional organization of abstract concepts and the neural dissociation between their linguistic contextual and featural aspects.
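The core computation in a representational similarity analysis like the one described above is to build a pairwise-dissimilarity matrix (RDM) from the neural activation patterns, build another from a theoretical model (e.g., word embeddings), and compare them with a rank correlation. A minimal sketch, assuming numpy/scipy and purely random toy data; all array sizes and variable names are hypothetical and this is not the study's actual pipeline:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_correlation(neural_patterns, model_vectors):
    """Correlate a neural RDM with a model RDM (standard rank-based RSA)."""
    # Condensed vectors of pairwise dissimilarities (1 - Pearson r per pair)
    neural_rdm = pdist(neural_patterns, metric="correlation")
    model_rdm = pdist(model_vectors, metric="correlation")
    rho, p = spearmanr(neural_rdm, model_rdm)
    return rho, p

# Toy data: 360 items, as in the study; feature dimensions are made up
rng = np.random.default_rng(0)
patterns = rng.normal(size=(360, 100))  # simulated voxel activation patterns
vectors = rng.normal(size=(360, 300))   # simulated corpus-based word embeddings
rho, p = rsa_correlation(patterns, vectors)
```

With independent random inputs the correlation hovers near zero; a model that captures the neural similarity structure yields a reliably positive rho.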
Humans process the meaning of the world via both verbal and nonverbal modalities. It has been established that widely distributed cortical regions are involved in semantic processing, yet the global wiring pattern of this brain system has not been considered in current neurocognitive semantic models. We review evidence from the brain-network perspective, which shows that the semantic system is topologically segregated into three brain modules. Revisiting previous region-based evidence in light of these new network findings, we postulate that these three modules support multimodal experiential representation, language-supported representation, and semantic control. A tri-network neurocognitive model of semantic processing is proposed, which generates new hypotheses regarding the network basis of different types of semantic processes.
Concepts can be related in many ways. They can belong to the same taxonomic category (e.g., "doctor" and "teacher," both in the category of people) or be associated with the same event context (e.g., "doctor" and "stethoscope," both associated with medical scenarios). How are these two major types of semantic relations coded in the brain? We constructed stimuli from three taxonomic categories (people, manmade objects, and locations) and three thematic categories (school, medicine, and sports) and investigated the neural representations of these two dimensions using representational similarity analyses in human participants (10 men and 9 women). In specific regions of interest, the left anterior temporal lobe (ATL) and the left temporoparietal junction (TPJ), we found that, whereas both areas showed significant effects of taxonomic information, taxonomic relations had stronger effects in the ATL than in the TPJ ("doctor" and "teacher" closer in ATL neural activity), with the reverse being true for thematic relations ("doctor" and "stethoscope" closer in TPJ neural activity). A whole-brain searchlight analysis revealed that widely distributed regions, mainly in the left hemisphere, represented the taxonomic dimension. Interestingly, significant effects of thematic relations were observed only after taxonomic differences were controlled for, in the left TPJ, the right superior lateral occipital cortex, and other frontal, temporal, and parietal regions. In summary, taxonomic grouping is a primary organizational dimension across distributed brain regions, with thematic grouping further embedded within such taxonomic structures. How are concepts organized in the brain? It is well established that concepts belonging to the same taxonomic categories (e.g., "doctor" and "teacher") share neural representations in specific brain regions.
How concepts are associated in other ways (e.g., "doctor" and "stethoscope," which are thematically related) remains poorly understood. We used representational similarity analyses to unravel the neural representations of these different types of semantic relations by testing the same set of words that could be grouped either by taxonomic categories or by thematic categories. We found that widely distributed brain areas primarily represented taxonomic categories, with the thematic categories further embedded within the taxonomic structure.
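Testing for thematic effects after taxonomic differences are controlled for corresponds, in RSA terms, to a partial rank correlation between RDM vectors. A minimal sketch, assuming numpy/scipy and purely synthetic RDM vectors; the regression-based partialling shown here is one common approach, not necessarily the study's exact method, and all names are hypothetical:

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

def partial_spearman(x, y, covar):
    """Spearman correlation between RDM vectors x and y, controlling covar."""
    def resid(v, c):
        design = np.column_stack([np.ones(len(c)), c])
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    xr, yr, cr = rankdata(x), rankdata(y), rankdata(covar)
    r, _ = pearsonr(resid(xr, cr), resid(yr, cr))
    return r

# Toy check: two RDMs that correlate only because they share a confound
rng = np.random.default_rng(1)
covar = rng.normal(size=2000)              # e.g., a taxonomic-distance RDM
x = covar + rng.normal(size=2000)          # "neural" RDM driven by the confound
y = covar + rng.normal(size=2000)          # "thematic" RDM driven by the confound
r_raw = pearsonr(rankdata(x), rankdata(y))[0]   # ordinary Spearman, inflated
r_partial = partial_spearman(x, y, covar)       # near zero once controlled
```

The raw correlation is substantial, but partialling out the shared confound drives it to roughly zero, which is exactly the logic of looking for thematic effects beyond the taxonomic structure.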
Access to semantic information of visual word forms is a key component of reading comprehension. In this study, we examined the involvement of the visual word form area (VWFA) in this process by investigating whether and how the activity patterns of the VWFA are influenced by semantic information during semantic tasks. We asked participants to perform two semantic tasks (taxonomic or thematic categorization) on visual words while obtaining the blood-oxygen-level-dependent (BOLD) fMRI responses to each word. Representational similarity analysis with four types of semantic relations (taxonomic, thematic, subjective semantic rating, and word2vec) revealed that neural activity patterns of the VWFA were associated with taxonomic information only in the taxonomic task, with thematic information only in the thematic task, and with the composite semantic information measured by word2vec in both semantic tasks. Furthermore, the semantic information in the VWFA could not be explained by confounding factors including orthographic, low-level visual, and phonological information. These findings provide positive evidence for the presence of both orthographic and task-relevant semantic information in the VWFA and have significant implications for the neurobiological basis of reading.
Neuroimaging studies have consistently indicated that semantic processing involves a brain network consisting of multimodal cortical regions distributed in the frontal, parietal, and temporal lobes. However, little is known about how semantic information is organized and processed within the network. Some recent studies have indicated that sensory-motor semantic information modulates the activation of this network. Other studies have indicated that this network responds more to social semantic information than to other information. Using fMRI, we jointly investigated the brain activations evoked by social and sensory-motor semantic information by manipulating the sociality and imageability of verbs in a word comprehension task. We detected two subgroups of brain regions within the network showing sociality and imageability effects, respectively. The two subgroups of regions are distinct but overlap in the bilateral angular gyri and adjacent middle temporal gyri. A follow-up analysis of resting-state functional connectivity showed that the dissociation of the two subgroups of regions is partially associated with their intrinsic functional connectivity differences. Additionally, an interaction effect of sociality and imageability was observed in the left anterior temporal lobe. Our findings indicate that the multimodal cortical semantic network has fine subdivisions that process and integrate social and sensory-motor semantic information.
The representation of object categories is a classical question in cognitive neuroscience, and compelling evidence has identified specific brain regions showing preferential activation to categories of evolutionary significance. However, the potential contribution of connectivity patterns to category processing remains largely unknown. Adopting a continuous multicategory paradigm, we obtained whole-brain functional connectivity (FC) patterns for each of four categories (faces, scenes, animals, and tools) in healthy human adults and applied multivariate connectivity pattern classification analyses. We found that the whole-brain FC patterns made high-accuracy predictions of which category was being viewed. The decoding was successful even after the contributions of regions showing classical category-selective activations were excluded. We further identified the discriminative network for each category, which extended well beyond the classical category-selective regions. Together, these results reveal novel mechanisms by which categorical information is represented in large-scale FC patterns, with general implications for the interactive nature of distributed brain areas underlying high-level cognition. Hum Brain Mapp 37:3685-3697, 2016. © 2016 Wiley Periodicals, Inc.