The question of how the human brain represents conceptual knowledge has been debated in many scientific fields. Brain imaging studies have shown that different spatial patterns of neural activation are associated with thinking about different semantic categories of pictures and words (for example, tools, buildings, and animals). We present a computational model that predicts the functional magnetic resonance imaging (fMRI) neural activation associated with words for which fMRI data are not yet available. This model is trained with a combination of data from a trillion-word text corpus and observed fMRI data associated with viewing several dozen concrete nouns. Once trained, the model predicts fMRI activation for thousands of other concrete nouns in the text corpus, with highly significant accuracies over the 60 nouns for which we currently have fMRI data.
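The model described above maps corpus-derived semantic features of a noun to predicted voxel activations. As a minimal sketch of that kind of linear encoding model — using synthetic data and invented dimensions, not the study's actual features or fMRI images — one can fit a ridge regression from feature vectors to voxel patterns and evaluate held-out nouns by matching predictions to observations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 60 nouns, 25 corpus-derived semantic features
# (e.g., co-occurrence with sensory-motor verbs), 500 voxels.
n_nouns, n_features, n_voxels = 60, 25, 500
X = rng.standard_normal((n_nouns, n_features))        # features per noun
B_true = rng.standard_normal((n_features, n_voxels))
Y = X @ B_true + 0.1 * rng.standard_normal((n_nouns, n_voxels))  # "fMRI" images

# Train on 58 nouns, hold out 2 (a leave-two-out scheme).
train, test = slice(0, 58), slice(58, 60)
lam = 1.0  # ridge penalty, chosen arbitrarily for this sketch
B = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(n_features),
                    X[train].T @ Y[train])

# Predict activation for the held-out nouns and match predictions
# to observed images by cosine similarity.
pred = X[test] @ B

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

correct = cos(pred[0], Y[test][0]) + cos(pred[1], Y[test][1])
swapped = cos(pred[0], Y[test][1]) + cos(pred[1], Y[test][0])
print(correct > swapped)  # correct labeling should score higher
```

Evaluation by deciding which of two held-out images matches which prediction mirrors the pairwise accuracy measure commonly used with this paradigm.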
A number of studies have investigated differences in the neural correlates of abstract and concrete concepts, with conflicting results. A quantitative, coordinate-based meta-analysis combined data from 303 participants across 19 functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) studies to identify differences in the neural representation of abstract and concrete concepts. Studies were included if they reported peak activations in standard space for the contrast abstract > concrete or concrete > abstract at the whole-brain level in healthy adults. Multilevel kernel density analysis (MKDA) was performed to identify the proportion of contrasts activating each region, weighted by sample size and analysis type (fixed or random effects). Meta-analysis results indicated consistent and meaningful differences in the neural representation of abstract and concrete concepts. Abstract concepts elicit greater activity in the inferior frontal gyrus and middle temporal gyrus than concrete concepts, while concrete concepts elicit greater activity in the posterior cingulate, precuneus, fusiform gyrus, and parahippocampal gyrus. These results suggest greater engagement of the verbal system in the processing of abstract concepts and greater engagement of the perceptual system in the processing of concrete concepts, likely via mental imagery.
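The core of an MKDA-style analysis is to convolve each study's reported peak coordinates with a spherical kernel and average the resulting indicator maps, weighted by sample size. A toy one-dimensional sketch, with invented coordinates and sample sizes (real MKDA also applies fixed/random-effects weighting and Monte Carlo thresholding, omitted here):

```python
import numpy as np

# Toy MKDA-style sketch on a 1-D "brain" of 100 voxels. Each study
# contributes an indicator map: 1 within `radius` voxels of any
# reported peak, 0 elsewhere (the spherical kernel of real MKDA,
# flattened to one dimension).
n_vox, radius = 100, 5
studies = [
    {"peaks": [20, 22], "n": 12},   # e.g., abstract > concrete peaks
    {"peaks": [21],     "n": 30},
    {"peaks": [70],     "n": 15},
]

def indicator(peaks):
    m = np.zeros(n_vox)
    for p in peaks:
        m[max(0, p - radius):p + radius + 1] = 1.0
    return m

# Weight each contrast by sqrt(sample size), then normalize so the
# map reads as a weighted proportion of contrasts activating nearby.
w = np.sqrt([s["n"] for s in studies])
density = sum(wi * indicator(s["peaks"]) for wi, s in zip(w, studies)) / w.sum()

print(density[21] > density[70])  # convergent peaks outweigh an isolated one
```

The resulting map is highest where independent studies report nearby peaks, which is what "consistent differences" means operationally in this kind of meta-analysis.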
Previous studies have succeeded in identifying the cognitive state corresponding to the perception of a set of depicted categories, such as tools, by analyzing the accompanying pattern of brain activity measured with fMRI. The current research focused on identifying the cognitive state associated with a 4 s viewing of an individual line drawing (1 of 10 familiar objects, 5 tools and 5 dwellings, such as a hammer or a castle). Here we demonstrate the ability to reliably (1) identify which of the 10 drawings a participant was viewing, based on that participant's characteristic whole-brain neural activation patterns, excluding visual areas; (2) identify the category of the object with even higher accuracy, based on that participant's activation; and (3) identify, for the first time, both the individual object and its category, based only on other participants' activation patterns. The voxels important for category identification were located similarly across participants and were distributed throughout the cortex, concentrated in ventral temporal perceptual areas but also extending to more frontal association areas, with a somewhat left-lateralized distribution. These findings indicate the presence of stable, distributed, communal, and identifiable neural states corresponding to object concepts.
In this work we explore whether the patterns of brain activity associated with thinking about concrete objects depend on stimulus presentation format, that is, whether an object is referred to in written or pictorial form. Multi-voxel pattern analysis methods were applied to brain imaging (fMRI) data to identify the item category associated with brief viewings of each of 10 words (naming 5 tools and 5 dwellings) and, separately, with brief viewings of each of 10 pictures (line drawings) of the objects named by the words. These methods were able to identify the category of the picture the participant was viewing based on neural activation patterns observed during word-viewing, and the category of the word the participant was viewing based on neural activation patterns observed during picture-viewing, using data from only that participant or only from other participants. These results provide an empirical demonstration of object category identification across stimulus formats and across participants. In addition, we were able to identify the category of the word the participant was viewing based on the patterns of neural activation generated during word-viewing by that participant or by all other participants. Similarly, we were able to identify, with even higher accuracy, the category of the picture the participant was viewing based on the patterns of neural activation observed during picture-viewing by that participant or by all other participants. The brain locations that were important for category identification were similar across participants and were distributed throughout the cortex, where various object properties might be neurally represented. These findings indicate consistent triggering of semantic representations by different stimulus formats and suggest the presence of stable, distributed, and identifiable neural states that are common to pictorial and verbal input referring to object categories.
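The cross-format logic above — train a pattern classifier on word-viewing trials, test it on picture-viewing trials — can be sketched with synthetic data in which category contributes a shared signal across formats. All quantities here (voxel counts, signal strengths, the nearest-centroid classifier) are illustrative assumptions, not the study's actual method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy cross-format MVPA: each trial is a 200-voxel pattern. Category
# (tool vs. dwelling) contributes a signal shared across formats;
# presentation format adds its own nuisance component plus noise.
n_vox = 200
cat_sig = {"tool": rng.standard_normal(n_vox),
           "dwelling": rng.standard_normal(n_vox)}
fmt_sig = {"word": rng.standard_normal(n_vox),
           "picture": rng.standard_normal(n_vox)}

def trial(cat, fmt):
    return cat_sig[cat] + 0.5 * fmt_sig[fmt] + 0.5 * rng.standard_normal(n_vox)

cats = ["tool", "dwelling"]
train = [(c, trial(c, "word")) for c in cats for _ in range(20)]
test = [(c, trial(c, "picture")) for c in cats for _ in range(20)]

# Nearest-centroid classifier: fit on word-viewing patterns,
# evaluate on picture-viewing patterns (cross-format identification).
centroids = {c: np.mean([x for cc, x in train if cc == c], axis=0) for c in cats}

def predict(x):
    return min(cats, key=lambda c: np.linalg.norm(x - centroids[c]))

acc = np.mean([predict(x) == c for c, x in test])
print(f"cross-format accuracy: {acc:.2f}")
```

Because the category signal survives the change of format while the format signal does not transfer, the classifier generalizes — the same intuition behind the cross-format identification reported above.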
In human vision, acuity and color sensitivity are greatest at the center of fixation and fall off rapidly as visual eccentricity increases. Humans exploit the high resolution of central vision by actively moving their eyes three to four times each second. Here we demonstrate that it is possible to classify the task that a person is engaged in from their eye movements using multivariate pattern classification. The results have important theoretical implications for computational and neural models of eye movement control. They also have important practical implications for using passively recorded eye movements to infer the cognitive state of a viewer, information that can be used as input for intelligent human-computer interfaces and related applications.
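Classifying the viewer's task from eye movements starts with reducing a raw gaze trace to summary features such as saccade counts and scanpath spread. A minimal sketch of that feature-extraction step, with an illustrative sampling rate and velocity threshold (not values from the study), applied to two synthetic traces:

```python
import numpy as np

rng = np.random.default_rng(4)

rate_hz = 250            # assumed eye-tracker sampling rate
saccade_thresh = 30.0    # deg/s; faster samples count as saccadic

def gaze_features(x, y):
    """x, y: gaze position in degrees of visual angle, sampled at rate_hz."""
    vel = np.hypot(np.diff(x), np.diff(y)) * rate_hz   # instantaneous deg/s
    moving = vel > saccade_thresh
    n_saccades = int(np.sum(np.diff(moving.astype(int)) == 1))
    return np.array([n_saccades,              # how often the eyes jumped
                     moving.mean(),           # fraction of samples in saccade
                     np.std(x) + np.std(y)])  # spatial spread of the scanpath

t = np.arange(0, 2, 1 / rate_hz)
# Reading-like trace: slow rightward drift with abrupt return sweeps.
reading = gaze_features((8 * t) % 4, np.zeros_like(t))
# Search-like trace: fast, erratic wandering over the display.
steps = 0.5 * rng.standard_normal((2, t.size))
searching = gaze_features(np.cumsum(steps[0]), np.cumsum(steps[1]))
print(reading[0] < searching[0])  # search yields many more saccade onsets
```

Feature vectors like these, one per viewing period, are what a multivariate pattern classifier would then be trained on to separate tasks.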
The default mode network (DMN) is a collection of brain areas found to be consistently deactivated during task performance. Previous neuroimaging studies of resting state have revealed reduced task-related deactivation of this network in autism. We investigated the DMN in 13 high-functioning adults with autism spectrum disorders (ASD) and 14 typically developing control participants during three fMRI studies (two language tasks and a Theory-of-Mind (ToM) task). Each study had separate blocks of fixation/resting baseline. The data from the task blocks and fixation blocks were collated to examine deactivation and functional connectivity. Deficits in the deactivation of the DMN in individuals with ASD were specific to the ToM task, with no group differences in deactivation during the language tasks or a combined language and self-other discrimination task. During rest blocks following the ToM task, the ASD group showed less deactivation than the control group in a number of DMN regions, including medial prefrontal cortex (MPFC), anterior cingulate cortex, and posterior cingulate gyrus/precuneus. In addition, we found weaker functional connectivity of the MPFC in individuals with ASD compared to controls. Furthermore, we were able to reliably classify participants into ASD or typically developing control groups based on both whole-brain and seed-based connectivity patterns, with accuracy up to 96.3%. These findings indicate that deactivation and connectivity of the DMN were altered in individuals with ASD. In addition, these findings suggest that the deficits in DMN connectivity could be a neural signature that can be used for classifying an individual as belonging to the ASD group.
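Seed-based connectivity of the kind used above amounts to correlating a seed region's time course with every other voxel. A minimal sketch with synthetic time courses (the seed standing in for an MPFC ROI; coupling strength and dimensions are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: 150 time points, 300 voxels; the first 50 voxels
# are coupled to the seed time course, the rest are pure noise.
n_time, n_vox = 150, 300
seed = rng.standard_normal(n_time)            # stand-in MPFC seed time course
brain = rng.standard_normal((n_time, n_vox))
brain[:, :50] += 0.8 * seed[:, None]

def seed_map(seed, brain):
    """Pearson correlation of the seed with every voxel's time course."""
    s = (seed - seed.mean()) / seed.std()
    b = (brain - brain.mean(0)) / brain.std(0)
    return (s @ b) / len(s)

r = seed_map(seed, brain)
print(r[:50].mean() > r[50:].mean())  # coupled voxels show higher connectivity
```

One such map per participant, flattened into a feature vector, is the sort of input a group classifier (ASD vs. control) would operate on.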
The goal of the study was to identify the neural representation of a noun's meaning in one language based on the neural representation of that same noun in the other language. Machine learning methods were used to train classifiers to identify which individual noun bilingual participants were thinking about in one language based solely on their brain activation in the other language. The study shows reliable (p < .05) pattern-based classification of brain activity for nouns across languages. It also shows that the stable voxels used to classify the brain activation were located in areas associated with encoding the semantic dimensions of the words in the study. The identification of the semantic trace of individual nouns from the pattern of cortical activity demonstrates the existence of a multi-voxel pattern of activation across the cortex for a single noun that is common to both languages in bilinguals.
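The "stable voxels" mentioned above are typically selected by keeping voxels whose activation profile across the word set replicates between repeated presentations. A sketch of that selection step on synthetic data (two presentations, with half the voxels genuinely tuned and half pure noise; all sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: 20 nouns x 400 voxels, two presentations.
# Voxels 0-199 carry a reproducible semantic tuning; 200-399 are noise.
n_nouns, n_vox = 20, 400
profile = rng.standard_normal((n_nouns, n_vox))   # true tuning
noise = lambda: rng.standard_normal((n_nouns, n_vox))
pres1 = profile + 0.3 * noise()
pres2 = profile + 0.3 * noise()
pres1[:, 200:] = noise()[:, 200:]
pres2[:, 200:] = noise()[:, 200:]

def stability(a, b):
    """Per-voxel correlation of activation profiles across presentations."""
    az = (a - a.mean(0)) / a.std(0)
    bz = (b - b.mean(0)) / b.std(0)
    return (az * bz).mean(0)

stable = np.argsort(stability(pres1, pres2))[-100:]  # top 100 stable voxels
print((stable < 200).mean())  # mostly drawn from the genuinely tuned half
```

Restricting the classifier to voxels selected this way discards voxels that respond unreliably, which is what makes cross-language (and cross-participant) identification feasible.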