Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application.
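The train-on-one-context, test-on-another logic of MVCC can be sketched on synthetic data. Everything below (voxel counts, the two contexts, the classifier choice) is an illustrative assumption, not any particular study's pipeline:

```python
# Minimal MVCC sketch: a classifier trained on voxel patterns from one
# cognitive context is tested on patterns from another. All names and
# dimensions are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 100

# Two stimulus classes share a voxel-pattern code across contexts, so a
# classifier trained in context A can generalize to context B.
class_signal = rng.normal(size=(2, n_voxels))

def simulate_context():
    labels = np.repeat([0, 1], n_trials // 2)
    patterns = class_signal[labels] + rng.normal(size=(n_trials, n_voxels))
    return patterns, labels

X_train, y_train = simulate_context()  # context A (e.g., perception)
X_test, y_test = simulate_context()    # context B (e.g., imagery)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
cross_acc = clf.score(X_test, y_test)
print(f"cross-context accuracy: {cross_acc:.2f} (chance = 0.50)")
```

Above-chance cross-context accuracy is the MVCC signature of a representation that abstracts across contexts; in practice, significance is typically assessed against an empirical null distribution (e.g., by permutation testing) rather than the nominal chance level.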
A development essential for understanding the neural basis of complex behavior and cognition is the description, during the last quarter of the twentieth century, of detailed patterns of neuronal circuitry in the mammalian cerebral cortex. This effort established that sensory pathways exhibit successive levels of convergence, from the early sensory cortices to sensory-specific association cortices and to multisensory association cortices, culminating in maximally integrative regions; and that this convergence is reciprocated by successive levels of divergence, from the maximally integrative areas all the way back to the early sensory cortices. This article first provides a brief historical review of these neuroanatomical findings, which were relevant to the study of brain and mind-behavior relationships using a variety of approaches and to the proposal of heuristic anatomo-functional frameworks. In a second part, the article reviews new evidence that has accumulated from studies of functional neuroimaging, employing both univariate and multivariate analyses, as well as electrophysiology, in humans and other mammals, that the integration of information across the auditory, visual, and somatosensory-motor modalities proceeds in a content-rich manner. Behaviorally and cognitively relevant information is extracted from and conserved across the different modalities, both in higher-order association cortices and in early sensory cortices. Such stimulus-specific information is plausibly relayed along the neuroanatomical pathways alluded to above. The evidence reviewed here suggests the need for further in-depth exploration of the intricate connectivity of the mammalian cerebral cortex in experimental neuroanatomical studies.
Drawing from a common lexicon of semantic units, humans fashion narratives whose meaning transcends that of their individual utterances. However, while brain regions that represent lower-level semantic units, such as words and sentences, have been identified, questions remain about the neural representation of narrative comprehension, which involves inferring cumulative meaning. To address these questions, we exposed English, Mandarin, and Farsi native speakers to native-language translations of the same stories during fMRI scanning. Using a new technique in natural language processing, we calculated the distributed representations of these stories (capturing the meaning of the stories in high-dimensional semantic space) and demonstrate that, using these representations, we can identify the specific story a participant was reading from the neural data. Notably, this was possible even when the distributed representations were calculated using stories in a different language than the one the participant was reading. Relying on over 44 billion classifications, our results reveal that identification relied on a collection of brain regions most prominently located in the default mode network. These results demonstrate that neuro-semantic encoding of narratives happens at levels higher than individual semantic units and that this encoding is systematic across both individuals and languages.
Decoding the Neural Representation of Story Meanings across Languages. One of the defining characteristics of human language is its capacity for semantic extensibility. Drawing from a common lexicon of morphemes and words, humans generate and comprehend sophisticated, higher-level utterances that transcend the sum of their individual units. This is perhaps best exemplified in stories, in which sequences of events invite inferences about the intentions and motivations of characters, about cause and effect, and about theme and message.
The kind of meaning that emerges over time as one listens to a story is not easily captured by analysis at the word level alone. Further, a necessary condition for generating higher-level semantic constructs is that speakers of the same language infer similar meanings from expressions of both lower- and higher-level semantic units. For example, it can be assumed that when speakers of the same language listen to stories, the perceived meanings of these stories have much in common. In this work, our aim is to move beyond word-level semantics to investigate neuro-semantic representations at the story level across three different languages. Specifically, we set out to determine whether there are systematic patterns in the neuro-semantic representations of stories beyond those corresponding to word-level stimuli. Our aim is motivated by the long-standing understanding that discourse representations are different from the sum of all of their lexical or clausal parts. Most psycholinguistic models of discourse processing are concerned with the con...
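The story-identification logic described above can be sketched in miniature. This is a toy simulation under assumed dimensions: the study derived high-dimensional distributed representations from the story texts, which are stood in for here by random vectors, and the ridge-regression encoding step is an assumed stand-in, not the authors' pipeline:

```python
# Toy sketch of identifying a story from neural data via distributed semantic
# representations. A ridge regression maps story embeddings to neural
# patterns; a held-out story is identified by correlating its predicted
# pattern with the observed patterns of all stories.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_stories, emb_dim, n_voxels = 16, 8, 200

embeddings = rng.normal(size=(n_stories, emb_dim))  # story meaning vectors
latent_map = rng.normal(size=(emb_dim, n_voxels))   # simulated semantic-to-neural map
neural = embeddings @ latent_map + rng.normal(scale=0.5, size=(n_stories, n_voxels))

correct = 0
for held_out in range(n_stories):
    train = [i for i in range(n_stories) if i != held_out]
    model = Ridge(alpha=1.0).fit(embeddings[train], neural[train])
    predicted = model.predict(embeddings[held_out:held_out + 1])[0]
    # pick the story whose observed pattern best matches the prediction
    corrs = [np.corrcoef(predicted, neural[i])[0, 1] for i in range(n_stories)]
    correct += int(np.argmax(corrs) == held_out)

accuracy = correct / n_stories
print(f"identification accuracy: {accuracy:.2f} (chance = {1 / n_stories:.2f})")
```

Because the embeddings capture story meaning rather than language-specific word forms, the same matching scheme can in principle be applied across languages, which is the key cross-lingual result reported above.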
People can identify objects in the environment with remarkable accuracy, irrespective of the sensory modality they use to perceive them. This suggests that information from different sensory channels converges somewhere in the brain to form modality-invariant representations, i.e., representations that reflect an object independently of the modality through which it has been apprehended. In this functional magnetic resonance imaging study of human subjects, we first identified brain areas that responded to both visual and auditory stimuli and then used crossmodal multivariate pattern analysis to evaluate the neural representations in these regions for content-specificity (i.e., do different objects evoke different representations?) and modality-invariance (i.e., do the sight and the sound of the same object evoke a similar representation?). While several areas became activated in response to both auditory and visual stimulation, only the neural patterns recorded in a region around the posterior part of the superior temporal sulcus displayed both content-specificity and modality-invariance. This region thus appears to play an important role in our ability to recognize objects in our surroundings through multiple sensory channels and to process them at a supra-modal (i.e., conceptual) level.
We continuously perceive objects in the world through multiple sensory channels. In this study, we investigated the convergence of information from different sensory streams within the cerebral cortex. We presented volunteers with three common objects via three different modalities-sight, sound, and touch-and used multivariate pattern analysis of functional magnetic resonance imaging data to map the cortical regions containing information about the identity of the objects. We could reliably predict which of the three stimuli a subject had seen, heard, or touched from the pattern of neural activity in the corresponding early sensory cortices. Intramodal classification was also successful in large portions of the cerebral cortex beyond the primary areas, with multiple regions showing convergence of information from two or all three modalities. Using crossmodal classification, we also searched for brain regions that would represent objects in a similar fashion across different modalities of presentation. We trained a classifier to distinguish objects presented in one modality and then tested it on the same objects presented in a different modality. We detected audiovisual invariance in the right temporo-occipital junction, audiotactile invariance in the left postcentral gyrus and parietal operculum, and visuotactile invariance in the right postcentral and supramarginal gyri. Our maps of multisensory convergence and crossmodal generalization reveal the underlying organization of the association cortices, and may be related to the neural basis for mental concepts.
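The crossmodal train/test logic can be sketched as follows. This is a synthetic illustration with assumed object codes and modality-specific offsets, not the study's analysis code; averaging the two train/test directions is a common way of reporting cross-classification:

```python
# Toy sketch of crossmodal classification: object identity is decoded by
# training on trials from one modality and testing on another, averaging
# both train/test directions. All quantities are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_voxels, n_objects = 60, 120, 3

# Each object has a modality-invariant pattern; each modality adds its own
# constant offset plus trial noise.
object_codes = rng.normal(size=(n_objects, n_voxels))

def simulate_modality():
    labels = np.tile(np.arange(n_objects), n_trials // n_objects)
    offset = rng.normal(size=n_voxels)  # modality-specific component
    X = object_codes[labels] + offset + rng.normal(size=(n_trials, n_voxels))
    return X, labels

X_vis, y_vis = simulate_modality()  # e.g., visual trials
X_aud, y_aud = simulate_modality()  # e.g., auditory trials

def cross_decode(X_a, y_a, X_b, y_b):
    clf = LogisticRegression(max_iter=1000).fit(X_a, y_a)
    return clf.score(X_b, y_b)

acc = (cross_decode(X_vis, y_vis, X_aud, y_aud)
       + cross_decode(X_aud, y_aud, X_vis, y_vis)) / 2
print(f"crossmodal accuracy: {acc:.2f} (chance = {1 / n_objects:.2f})")
```

In a whole-brain mapping analysis such as the one described above, this computation would be repeated within local neighborhoods of voxels (a searchlight) to localize the regions supporting crossmodal generalization.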