There have been several reports in the literature of faster visual lexical decisions to words that are semantically ambiguous. All current models of this ambiguity advantage assume that it is the presence of multiple unrelated meanings that produces this benefit. A set of three lexical decision experiments reported here challenges this assumption. We contrast the ambiguity seen in words like bark, which have multiple unrelated meanings, with words that have multiple related word senses (e.g., twist). In all three experiments we find that while multiple word senses do produce faster responses, ambiguity between multiple meanings delays recognition. These results suggest that, while competition between the multiple meanings of ambiguous words delays their recognition, the rich semantic representations associated with words with many senses facilitate their recognition.

Key words: lexical ambiguity; polysemy; distributed semantic representations.

Many words are semantically ambiguous and can refer to more than one concept. For example, bark can refer either to a part of a tree or to the sound made by a dog. To understand such words, we must select one of these interpretations, normally on the basis of the context in which the word occurs.

Words can be ambiguous in different ways. A word like bark has two semantically unrelated meanings, which seem to share the same written and spoken form purely by chance. More common than this type of accidental ambiguity is the systematic ambiguity between related word senses. For example, the word twist has a range of dictionary definitions, including to make into a coil or spiral, to operate by turning, to alter the shape of, to misconstrue the meaning of, to wrench or sprain, and to squirm or writhe. The meaning of this word varies systematically according to the context in which it is used; for example, there are important differences between what it means to twist an ankle and to twist the truth. However, although the word is ambiguous between these different interpretations, the interpretations are closely related to each other both etymologically and semantically; this is quite unlike the ambiguity of a word like bark.
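The contrast drawn here between unrelated meanings and related senses can be made concrete with a toy distributed-representation sketch. This is an illustration only, not the model reported in the paper: the dimensionality, the random vectors standing in for meanings and senses, and the blending scheme are all assumptions. If a word form encountered without disambiguating context activates a blend of its interpretations, the blend for a homonym like bark sits far from either of its unrelated meanings, whereas the blend for a polysemous word like twist remains close to every one of its related senses.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    """Scale a vector to unit length."""
    return v / np.linalg.norm(v)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(unit(a) @ unit(b))

dim = 200  # arbitrary dimensionality for the toy semantic space

# "bark": two unrelated meanings -> two independent random patterns.
bark_tree = rng.normal(size=dim)
bark_dog = rng.normal(size=dim)

# "twist": several related senses -> small perturbations of a shared core pattern.
core = rng.normal(size=dim)
twist_senses = [core + 0.4 * rng.normal(size=dim) for _ in range(6)]

# Without context, assume the word form activates an unweighted blend of its interpretations.
bark_blend = unit(bark_tree) + unit(bark_dog)
twist_blend = np.sum([unit(s) for s in twist_senses], axis=0)

print("bark blend vs. each meaning:",
      [round(cosine(bark_blend, m), 2) for m in (bark_tree, bark_dog)])
print("twist blend vs. each sense:",
      [round(cosine(twist_blend, s), 2) for s in twist_senses])
```

On this toy picture the homonym's blend is a poor match to either single interpretation (roughly 0.7 cosine similarity to each, so neither meaning is cleanly activated and the two compete), while the polyseme's overlapping senses all reinforce much the same pattern (roughly 0.9 similarity to every sense), which is one way of picturing why many related senses could help rather than hinder recognition.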
A number of regions of the temporal and frontal lobes are known to be important for spoken language comprehension, yet we do not have a clear understanding of their functional role(s). In particular, there is considerable disagreement about which brain regions are involved in the semantic aspects of comprehension. Two functional magnetic resonance imaging studies use the phenomenon of semantic ambiguity to identify regions within the fronto-temporal language network that subserve the semantic aspects of spoken language comprehension. Volunteers heard sentences containing ambiguous words (e.g. 'the shell was fired towards the tank') and well-matched low-ambiguity sentences (e.g. 'her secrets were written in her diary'). Although these sentences have similar acoustic, phonological, syntactic and prosodic properties (and were rated as being equally natural), the high-ambiguity sentences require additional processing by those brain regions involved in activating and selecting contextually appropriate word meanings. The ambiguity in these sentences goes largely unnoticed, and yet high-ambiguity sentences produced increased signal in left posterior inferior temporal cortex and inferior frontal gyri bilaterally. Given the ubiquity of semantic ambiguity, we conclude that these brain regions form an important part of the network that is involved in computing the meaning of spoken sentences.
Many word forms map onto multiple meanings (e.g., “ace”). The current experiments explore the extent to which adults reshape the lexical–semantic representations of such words on the basis of experience, to increase the availability of more recently accessed meanings. A naturalistic web-based experiment in which primes were presented within a radio programme (Experiment 1; N = 1800) and a lab-based experiment (Experiment 2) show that when listeners have encountered one or two disambiguated instances of an ambiguous word, they then retrieve this primed meaning more often (compared with an unprimed control condition). This word-meaning priming lasts up to 40 min after exposure, but decays very rapidly during this interval. Experiments 3 and 4 explore longer-term word-meaning priming by measuring the impact of more extended, naturalistic encounters with ambiguous words: recreational rowers (N = 213) retrieved rowing-related meanings for words (e.g., “feather”) more often if they had rowed that day, despite a median delay of 8 hours. The rate of rowing-related interpretations also increased with additional years of rowing experience. Taken together, these experiments show that individuals' overall meaning preferences reflect experience across a wide range of timescales, from minutes to years. In addition, priming was not reduced by a change in speaker identity (Experiment 1), suggesting that the phenomenon occurs at a relatively abstract lexical–semantic level. The impact of experience was reduced for older adults (Experiments 1, 3, 4), suggesting that the lexical–semantic representations of younger listeners may be more malleable to current linguistic experience.
How objects are represented and processed in the brain is a central topic in cognitive neuroscience. Previous studies have shown that knowledge of objects is represented in a feature-based distributed neural system primarily involving occipital and temporal cortical regions. Research with nonhuman primates suggests that these features are structured in a hierarchical system, with posterior neurons in the inferior temporal cortex representing simple features and anterior neurons in the perirhinal cortex representing complex conjunctions of features.
We used functional MRI and the anesthetic agent propofol to assess the relationship among neural responses to speech, successful comprehension, and conscious awareness. Volunteers were scanned while listening to sentences containing ambiguous words, matched sentences without ambiguous words, and signal-correlated noise (SCN). During three scanning sessions, participants were nonsedated (awake), lightly sedated (a slowed response to conversation), and deeply sedated (no conversational response, rousable by loud command). Bilateral temporal-lobe responses for sentences compared with signal-correlated noise were observed at all three levels of sedation, although prefrontal and premotor responses to speech were absent at the deepest level of sedation. Additional inferior frontal and posterior temporal responses to ambiguous sentences provide a neural correlate of semantic processes critical for comprehending sentences containing ambiguous words. However, this additional response was absent during light sedation, suggesting a marked impairment of sentence comprehension. A significant decline in postscan recognition memory for sentences also suggests that sedation impaired encoding of sentences into memory, with left inferior frontal and temporal lobe responses during light sedation predicting subsequent recognition memory. These findings suggest a graded degradation of cognitive function in response to sedation, such that 'higher-level' semantic and mnemonic processes can be impaired at relatively low levels of sedation, whereas perceptual processing of speech remains resilient even during deep sedation. These results have important implications for understanding the relationship between speech comprehension and awareness in the healthy brain, in patients receiving sedation, and in patients with disorders of consciousness.

Key words: anesthesia; functional MRI; language; memory; sedation.
Patients with category-specific deficits have motivated a range of hypotheses about the structure of the conceptual system. One class of models claims that apparent category dissociations emerge from the internal structure of concepts rather than fractionation of the system into separate substores. This account claims that distinctive properties of concepts in the living domain are vulnerable because of their weak correlation with other features. Given the assumption that mutual activation among correlated properties produces faster activation in the normal system, the authors predicted a disadvantage for the distinctive features of living things for unimpaired adults. Results of a speeded feature verification study supported this prediction, as did a computational simulation in which networks mapped from orthography to semantics.
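The mutual-activation assumption behind this prediction can be illustrated with a small settling sketch. Again, this is illustrative only: the feature names, the correlation values, and the update rule are assumptions, not the simulation reported in the paper. Features that are strongly correlated with other active features receive support from them and so cross an activation threshold sooner than a distinctive, weakly correlated feature.

```python
import numpy as np

# Toy mutual-activation sketch: every feature gets the same constant external
# input, plus internal support weighted by its correlation with the other
# features' current activations. Feature names and correlations are invented.
features = ["has eyes", "breathes", "has ears",   # shared, highly intercorrelated
            "has a mane"]                         # distinctive, weakly correlated

corr = np.array([          # pairwise support between features (diagonal unused)
    [0.0, 0.8, 0.8, 0.1],
    [0.8, 0.0, 0.8, 0.1],
    [0.8, 0.8, 0.0, 0.1],
    [0.1, 0.1, 0.1, 0.0],
])

act = np.zeros(len(features))   # feature activations, bounded in [0, 1)
crossed = {}                    # first step at which each feature crosses threshold
rate, external, threshold = 0.1, 0.2, 0.9

for step in range(1, 200):
    support = corr @ act                               # activation fed in by correlated features
    act += rate * (external + support) * (1.0 - act)   # bounded growth toward 1.0
    for name, a in zip(features, act):
        if a >= threshold and name not in crossed:
            crossed[name] = step

for name in features:
    print(f"{name:12s} reached threshold at step {crossed.get(name, '>199')}")
```

Under dynamics of this kind, the intercorrelated shared features settle early while the distinctive feature lags behind, so noise or damage added to the system hits distinctive features hardest; that lag is the toy analogue of the disadvantage for distinctive features of living things reported above.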
A diagnosis of vegetative state is made if a patient demonstrates no evidence of awareness of self or environment, no evidence of sustained, reproducible, purposeful or voluntary behavioural response to sensory stimuli and, critically, no evidence of language comprehension. For those patients who retain peripheral motor function, rigorous behavioural assessment is usually able to determine retained function. However, some patients do not retain the ability to respond overtly to command, and it is becoming increasingly accepted that assessment of these patients should include techniques that do not rely on any 'motor action' on the part of the patient. Here, we apply a hierarchical functional magnetic resonance imaging (fMRI) auditory processing paradigm to determine the extent of retained language processing in a group of 14 aetiologically heterogeneous patients who met the diagnostic criteria for either the vegetative state (n = 7), the minimally conscious state (n = 5), or who were in a severely disabled condition having emerged from a minimally conscious state (n = 2). Three different levels of speech processing were assessed: (i) low-level auditory responses were measured using a contrast between a set of auditory stimuli and a silence baseline; (ii) mid-level speech perception processing abilities were assessed by comparing intelligible speech to unintelligible noise stimuli and (iii) high-level semantic aspects of speech processing were assessed by comparing sentences made difficult to understand by the presence of semantically ambiguous words with matched low-ambiguity sentences. As expected, the two severely disabled but conscious patients showed preserved speech processing at all three levels. However, contrary to the diagnostic criteria defining the vegetative state, three patients (1 traumatic, 2 non-traumatic aetiology) demonstrated some evidence of preserved speech processing. The remaining four patients (1 traumatic, 3 non-traumatic aetiology) with a diagnosis of vegetative state showed no significant activation in response to sound compared with silence. These results provide further evidence that a subset of patients fulfilling the behavioural criteria for the vegetative state retain islands of preserved cognitive function.