Semantic memory refers to knowledge about people, objects, actions, relations, self, and culture acquired through experience. The neural systems that store and retrieve this information have been studied for many years, but a consensus regarding their identity has not been reached. Using strict inclusion criteria, we analyzed 120 functional neuroimaging studies focusing on semantic processing. Reliable areas of activation in these studies were identified using the activation likelihood estimate (ALE) technique. These activations formed a distinct, left-lateralized network comprising 7 regions: posterior inferior parietal lobe, middle temporal gyrus, fusiform and parahippocampal gyri, dorsomedial prefrontal cortex, inferior frontal gyrus, ventromedial prefrontal cortex, and posterior cingulate gyrus. Secondary analyses showed specific subregions of this network associated with knowledge of actions, manipulable artifacts, abstract concepts, and concrete concepts. The cortical regions involved in semantic processing can be grouped into 3 broad categories: posterior multimodal and heteromodal association cortex, heteromodal prefrontal cortex, and medial limbic regions. The expansion of these regions in the human relative to the nonhuman primate brain may explain uniquely human capacities to use language productively, plan, solve problems, and create cultural and technological artifacts, all of which depend on the fluid and efficient retrieval and manipulation of semantic knowledge.
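The core idea of the ALE technique mentioned above is to model each reported activation focus as a 3D Gaussian probability and combine studies as a probabilistic union at each voxel. The following is a minimal toy sketch of that idea (the grid size, sigma, and coordinates are illustrative, not the parameters of the actual meta-analysis, which also involves permutation-based thresholding not shown here):

```python
import numpy as np

def gaussian_map(shape, focus, sigma):
    """Model one reported activation focus as a 3D Gaussian probability map."""
    grid = np.indices(shape)  # shape: (3, X, Y, Z)
    d2 = sum((g - c) ** 2 for g, c in zip(grid, focus))
    return np.exp(-d2 / (2 * sigma ** 2))

def ale(study_foci, shape=(20, 20, 20), sigma=2.0):
    """ALE at each voxel = 1 - prod_i (1 - MA_i), the probabilistic union
    of the per-study modeled-activation (MA) maps."""
    ale_map = np.zeros(shape)
    for foci in study_foci:
        # One MA map per study: max over that study's foci
        ma = np.max([gaussian_map(shape, f, sigma) for f in foci], axis=0)
        ale_map = 1 - (1 - ale_map) * (1 - ma)
    return ale_map

# Two hypothetical studies reporting nearby foci around (10, 10, 10)
studies = [[(10, 10, 10)], [(11, 10, 10), (3, 3, 3)]]
m = ale(studies)
print(m[10, 10, 10] > m[0, 0, 0])  # convergent location scores higher
```

Voxels where foci from multiple studies converge accumulate high ALE values, which is what identifies "reliable areas of activation" across the 120 studies.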
Componential theories of lexical semantics assume that concepts can be represented by sets of features or attributes that are in some sense primitive or basic components of meaning. The binary features used in classical category and prototype theories are problematic in that these features are themselves complex concepts, leaving open the question of what constitutes a primitive feature. The present availability of brain imaging tools has enhanced interest in how concepts are represented in brains, and accumulating evidence supports the claim that these representations are at least partly "embodied" in the perception, action, and other modal neural systems through which concepts are experienced. In this study we explore the possibility of devising a componential model of semantic representation based entirely on such functional divisions in the human brain. We propose a basic set of approximately 65 experiential attributes based on neurobiological considerations, comprising sensory, motor, spatial, temporal, affective, social, and cognitive experiences. We provide normative data on the salience of each attribute for a large set of English nouns, verbs, and adjectives, and show how these attribute vectors distinguish a priori conceptual categories and capture semantic similarity. Robust quantitative differences between concrete object categories were observed across a large number of attribute dimensions. A within- versus between-category similarity metric showed much greater separation between categories than representations derived from distributional (latent semantic) analysis of text. Cluster analyses were used to explore the similarity structure in the data independent of a priori labels, revealing several novel category distinctions. 
We discuss how such a representation might deal with various longstanding problems in semantic theory, such as feature selection and weighting, representation of abstract concepts, effects of context on semantic retrieval, and conceptual combination. In contrast to componential models based on verbal features, the proposed representation systematically relates semantic content to large-scale brain networks and biologically plausible accounts of concept acquisition.
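The within- versus between-category similarity metric described above can be illustrated with a small sketch: concepts become vectors of experiential-attribute saliences, and category structure emerges when within-category pairs are more similar than between-category pairs. The attribute values and concept names below are invented for illustration and are not the published norms (which cover ~65 attributes, not 4):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two attribute vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy attribute vectors; columns might be e.g. Color, Motion, Sound, Manipulation
concepts = {
    "apple":   np.array([0.9, 0.1, 0.1, 0.8]),
    "cherry":  np.array([0.8, 0.1, 0.1, 0.7]),
    "thunder": np.array([0.1, 0.3, 0.9, 0.0]),
    "siren":   np.array([0.2, 0.4, 0.9, 0.1]),
}

def mean_sim(pairs):
    return float(np.mean([cosine(concepts[a], concepts[b]) for a, b in pairs]))

within = mean_sim([("apple", "cherry"), ("thunder", "siren")])
between = mean_sim([("apple", "thunder"), ("cherry", "siren")])
print(within > between)  # categories separate in attribute space
```

The paper's comparison against distributional (latent semantic) text representations uses the same logic: the larger the within/between separation, the better the representation captures a priori category structure.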
The pronunciation of irregular words in deep orthographies like English cannot be specified by simple rules. On the other hand, the fact that novel letter strings can be pronounced seems to imply the existence of such rules. These facts motivate dual-route models of word naming, which postulate separate lexical (whole-word) and non-lexical (rule-based) mechanisms for accessing phonology. We used fMRI during oral naming of irregular words, regular words, and nonwords, to test this theory against a competing single-mechanism account known as the triangle model, which proposes that all words are handled by a single system containing distributed orthographic, phonological, and semantic codes rather than word codes. Two versions of the dual-route model were distinguished: an "exclusive" version in which activation of one processing route predominates over the other, and a "parallel" version in which both routes are equally activated by all words. The fMRI results provide no support for the exclusive dual-route model. Several frontal, insular, anterior cingulate, and parietal regions showed responses that increased with naming difficulty (nonword > irregular word > regular word) and were correlated with response time, but there was no activation consistent with the predicted response of a nonlexical, rule-based mechanism (i.e., nonword > regular word > irregular word). Several regions, including the angular gyrus and dorsal prefrontal cortex bilaterally, left ventromedial temporal lobe, and posterior cingulate gyrus, were activated more by words than nonwords, but these "lexical route" regions were equally active for irregular and regular words. The results are compatible with both the parallel dual-route model and the triangle model. "Lexical route" regions also showed effects of word imageability.
Together with previous imaging studies using semantic task contrasts, the imageability effects are consistent with semantic processing in these brain regions, suggesting that word naming is partly semantically-mediated. © 2005 Elsevier Inc. All rights reserved.
Keywords: Word naming; Dual-route model; Triangle model

Introduction
The correspondence between spoken and written forms of a language is not always systematic. While in some alphabetic orthographies the sound of a word can be worked out using rules of pronunciation, in most, there are varying degrees of irregularity in the mapping between print and sound. In English, for example, Bernard Shaw pointed out that the word "fish" could be written ghoti if one were mischievous enough to borrow the spelling for /f/ from rough, the spelling of /ɪ/ from women, and the spelling of /sh/ from nation. Words like colonel and yacht are only some of the more extreme examples of such irregularity of pronunciation, which is pervasive in English and is seen in many of its more common words, including some, many, of, the, and word just used in this sentence. While the pronunciation of these "irregular" words would seem to be learned through rote memorization of the whole word, there is...
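The dual-route architecture tested in this study can be sketched in miniature: a lexical route performing whole-word lookup and a nonlexical route applying grapheme-to-phoneme rules. The mini-lexicon and rule table below are invented for illustration; real dual-route implementations (e.g., DRC) use far richer rule sets and interactive dynamics:

```python
# Hypothetical mini-lexicon and rule table, for illustration only
LEXICON = {"yacht": "/jɒt/", "colonel": "/ˈkɜːnəl/", "mint": "/mɪnt/"}
RULES = {"m": "m", "i": "ɪ", "n": "n", "t": "t", "p": "p"}

def lexical_route(word):
    """Whole-word lookup: succeeds for known words, fails for nonwords."""
    return LEXICON.get(word)

def nonlexical_route(word):
    """Grapheme-to-phoneme rules: regularizes everything it sees."""
    return "/" + "".join(RULES.get(ch, "?") for ch in word) + "/"

print(lexical_route("yacht"))    # stored irregular pronunciation
print(nonlexical_route("nimp"))  # nonwords handled only by rules
```

The "exclusive" versus "parallel" versions of the model differ in how the two routes' outputs are combined, which is what the nonword > regular > irregular prediction for the rule-based mechanism was designed to test.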
Recent research indicates that sensory and motor cortical areas play a significant role in the neural representation of concepts. However, little is known about the overall architecture of this representational system, including the role played by higher level areas that integrate different types of sensory and motor information. The present study addressed this issue by investigating the simultaneous contributions of multiple sensory-motor modalities to semantic word processing. With a multivariate fMRI design, we examined activation associated with 5 sensory-motor attributes (color, shape, visual motion, sound, and manipulation) for 900 words. Regions responsive to each attribute were identified using independent ratings of the attributes' relevance to the meaning of each word. The results indicate that these aspects of conceptual knowledge are encoded in multimodal and higher level unimodal areas involved in processing the corresponding types of information during perception and action, in agreement with embodied theories of semantics. They also reveal a hierarchical system of abstracted sensory-motor representations incorporating a major division between object interaction and object perception processes.
The role of sensory-motor systems in conceptual understanding has been controversial. It has been proposed that many abstract concepts are understood metaphorically through concrete sensory-motor domains such as actions. Using fMRI, we compared neural responses to literal action (Lit; The daughter grasped the flowers), metaphoric action (Met; The public grasped the idea), and abstract (Abs; The public understood the idea) sentences of varying familiarity. Both Lit and Met sentences activated the left anterior inferior parietal lobule (aIPL), an area involved in action planning, with Met sentences also activating a homologous area in the right hemisphere, relative to Abs sentences. Both Met and Abs sentences activated left superior temporal regions associated with abstract language. Importantly, activation in primary motor and biological motion perception regions was inversely correlated with Lit and Met familiarity. These results support the view that the understanding of metaphoric action retains a link to sensory-motor systems involved in action performance. However, the involvement of sensory-motor systems in metaphor understanding changes through a gradual abstraction process whereby relatively detailed simulations are used for understanding unfamiliar metaphors, and these simulations become less detailed and involve only secondary motor regions as familiarity increases. Consistent with these data, we propose that aIPL serves as an interface between sensory-motor and conceptual systems and plays an important role in both domains. The similarity of abstract and metaphoric sentences in the activation of left superior temporal regions suggests that action metaphor understanding is not completely based on sensory-motor simulations, but relies also on abstract lexical-semantic codes.
Language consists of sequences of words, but comprehending phrases involves more than concatenating meanings: A boat house is a shelter for boats, whereas a summer house is a house used during summer, and a ghost house is typically uninhabited. Little is known about the brain bases of combinatorial semantic processes. We performed two fMRI experiments using familiar, highly meaningful phrases (LAKE HOUSE) and unfamiliar phrases with minimal meaning created by reversing the word order of the familiar items (HOUSE LAKE). The first experiment used a 1-back matching task to assess implicit semantic processing, and the second used a classification task to engage explicit semantic processing. These conditions required processing of the same words, but with more effective combinatorial processing in the meaningful condition. The contrast of meaningful versus reversed phrases revealed activation primarily during the classification task, to a greater extent in the right hemisphere, including right angular gyrus, dorsomedial prefrontal cortex, and bilateral posterior cingulate/precuneus, areas previously implicated in semantic processing. Positive correlations of fMRI signal with lexical (word-level) frequency occurred exclusively with the 1-back task and to a greater spatial extent on the left, including left posterior middle temporal gyrus and bilateral parahippocampus. These results reveal strong effects of task demands on engagement of lexical versus combinatorial processing and suggest a hemispheric dissociation between these levels of semantic representation.
The sensory-motor account of conceptual processing suggests that modality-specific attributes play a central role in the organization of object and action knowledge in the brain. An opposing view emphasizes the abstract, amodal, and symbolic character of concepts, which are thought to be represented outside the brain's sensory-motor systems. We conducted a functional magnetic resonance imaging study in which the participants listened to sentences describing hand/arm action events, visual events, or abstract behaviors. In comparison to visual and abstract sentences, areas associated with planning and control of hand movements, motion perception, and vision were activated when understanding sentences describing actions. Sensory-motor areas were activated to a greater extent also for sentences with actions that relied mostly on hands, as opposed to arms. Visual sentences activated a small area in the secondary visual cortex, whereas abstract sentences activated superior temporal and inferior frontal regions. The results support the view that linguistic understanding of actions partly involves imagery or simulation of actions, and relies on some of the same neural substrate used for planning, performing, and perceiving actions.
Similar to functional magnetic resonance imaging (fMRI), functional near-infrared spectroscopy (fNIRS) detects the changes of hemoglobin species inside the brain, but via differences in optical absorption. Within the near-infrared spectrum, light can penetrate biological tissues and be absorbed by chromophores, such as oxyhemoglobin and deoxyhemoglobin. What makes fNIRS more advantageous is its portability and potential for long-term monitoring. This paper reviews the basic mechanisms of fNIRS and its current clinical applications, the limitations toward more widespread clinical usage of fNIRS, and current efforts to improve the temporal and spatial resolution of fNIRS toward robust clinical usage within subjects. Oligochannel fNIRS is adequate for estimating global cerebral function and it has become an important tool in the critical care setting for evaluating cerebral oxygenation and autoregulation in patients with stroke and traumatic brain injury. When it comes to a more sophisticated utilization, spatial and temporal resolution becomes critical. Multichannel NIRS has improved the spatial resolution of fNIRS for brain mapping in certain task modalities, such as language mapping. However, averaging and group analysis are currently required, limiting its clinical use for monitoring and real-time event detection in individual subjects. Advances in signal processing have moved fNIRS toward individual clinical use for detecting certain types of seizures, assessing autonomic function and cortical spreading depression. However, its lack of accuracy and precision has been the major obstacle toward more sophisticated clinical use of fNIRS. 
The use of high-density whole head optode arrays, precise sensor locations relative to the head, anatomical co-registration, short-distance channels, and multi-dimensional signal processing can be combined to improve the sensitivity of fNIRS and increase its use as a widespread clinical tool for the robust assessment of brain function.
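The optical-absorption principle described above is typically quantified with the modified Beer-Lambert law: measured optical-density changes at two near-infrared wavelengths are converted to oxy- and deoxyhemoglobin concentration changes by inverting a 2x2 system of extinction coefficients. The sketch below uses illustrative approximate coefficient, distance, and path-length values; real analyses should take these from published extinction tables and subject-specific geometry:

```python
import numpy as np

# Modified Beer-Lambert law:
#   dOD(lambda) = (eps_HbO * dHbO + eps_HbR * dHbR) * L * DPF
# Rows = wavelengths, columns = [HbO, HbR]; values are illustrative.
eps = np.array([[0.59, 1.67],   # ~760 nm: HbR absorbs more strongly
                [1.06, 0.78]])  # ~850 nm: HbO absorbs more strongly
L, dpf = 3.0, 6.0  # source-detector distance (cm), differential path-length factor

def hb_changes(d_od):
    """Invert the two-wavelength system for (dHbO, dHbR)."""
    return np.linalg.solve(eps * L * dpf, d_od)

d_od = np.array([0.01, 0.02])   # measured optical-density changes
d_hbo, d_hbr = hb_changes(d_od)
```

Measuring at two wavelengths on opposite sides of the hemoglobin isosbestic point is what makes the system invertible, and hence what lets fNIRS separate oxy- from deoxyhemoglobin rather than reporting a single blood-volume signal.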