Within the connectionist triangle model of reading aloud, interaction between semantic and phonological representations occurs for all words but is particularly important for correct pronunciation of lower frequency exception words. This framework therefore predicts that (a) semantic dementia, which compromises semantic knowledge, should be accompanied by surface dyslexia, a frequency-modulated deficit in exception word reading, and (b) there should be a significant relationship between the severity of semantic degradation and the severity of surface dyslexia. The authors evaluated these claims with reference to 100 observations of reading data from 51 cases of semantic dementia. Surface dyslexia was rampant, and a simple composite semantic measure accounted for half of the variance in low-frequency exception word reading. Although in 3 cases initial testing revealed a moderate semantic impairment but normal exception word reading, all of these became surface dyslexic as their semantic knowledge deteriorated further. The connectionist account attributes such cases to premorbid individual variation in semantic reliance for accurate exception word reading. These results provide a striking demonstration of the association between semantic dementia and surface dyslexia, a phenomenon that the authors have dubbed SD-squared.
Butler et al. relate behavioural deficits in 31 patients with chronic stroke aphasia to underlying neural structures. Using principal components analysis, they reduce a neuropsychological battery to three independent dimensions: phonological, semantic and executive-cognitive. Phonological and semantic processing are linked to dorsal and ventral pathway integrity, respectively.
Individual differences in the performance profiles of neuropsychologically-impaired patients are pervasive, yet there is still no resolution on the best way to model and account for the variation in their behavioural impairments and the associated neural correlates. To date, researchers have generally taken one of three approaches: a single-case study methodology, in which each case is considered separately; a case-series design, in which all individual patients from a small coherent group are examined and directly compared; or group studies, in which a sample of cases is investigated as one group on the assumption that they are drawn from a homogeneous category and that performance differences are of no interest. In recent research, we have developed a complementary alternative through the use of principal component analysis (PCA) of individual data from large patient cohorts. This data-driven approach not only generates a single unified model for the group as a whole (expressed in terms of the emergent principal components) but is also able to capture the individual differences between patients (in terms of their relative positions along the principal behavioural axes). We demonstrate the use of this approach by considering speech fluency, phonology and semantics in aphasia diagnosis and classification, as well as their unique neural correlates. PCA of the behavioural data from 31 patients with chronic post-stroke aphasia resulted in four statistically-independent behavioural components reflecting phonological, semantic, executive-cognitive and fluency abilities.
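The analysis strategy described above can be sketched in a few lines. This is a minimal illustration only: the scores below are random stand-ins (the real neuropsychological battery, preprocessing, and any component rotation are specified in the original paper), but it shows the key idea that PCA yields both group-level components and each patient's position along them.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical battery: 31 patients x 10 test scores.
# Random placeholders; the actual tests are listed in the paper.
scores = rng.normal(size=(31, 10))

# Standardize each test so components are not dominated by scale,
# then extract four principal components.
z = StandardScaler().fit_transform(scores)
pca = PCA(n_components=4)
patient_positions = pca.fit_transform(z)  # each patient's coordinates on the 4 axes

print(patient_positions.shape)        # one row per patient, one column per component
print(pca.explained_variance_ratio_)  # variance captured by each component
```

The `patient_positions` matrix is what captures individual differences: two patients with similar overall severity can still occupy very different positions along, say, the phonological versus semantic axes.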
Even after accounting for lesion volume, entering the four behavioural components simultaneously into a voxel-based correlational methodology (VBCM) analysis revealed that speech fluency (speech quanta) was uniquely correlated with left motor cortex and underlying white matter (including the anterior section of the arcuate fasciculus and the frontal aslant tract), phonological skills with regions in the superior temporal gyrus and pars opercularis, and semantics with the anterior temporal stem.
On the basis of a theory about the role of semantic knowledge in the recognition and production of familiar words and objects, we predicted that patients with semantic dementia would reveal a specific pattern of impairment on six different tasks typically considered "pre-" or "non-" semantic: reading aloud, writing to dictation, inflecting verbs, lexical decision, object decision, and delayed copy drawing. The prediction was that all tasks would reveal a frequency-by-typicality interaction, with patients performing especially poorly on lower-frequency items with atypical structure (e.g., words with an atypical spelling-to-sound relationship; objects with an atypical feature for their class, such as the hump on a camel, etc.). This prediction was correct in all 84 critical observations (14 patients performing 6 tasks), and a single component in a factor analysis accounted for 87% of the variance across seven measures: each patient's degree of impairment on atypical items in the six experimental tasks and a separate composite score reflecting his or her degree of semantic impairment. Errors also consistently conformed to the predicted pattern for both expressive and receptive tasks, with responses reflecting residual knowledge about the typical surface structure of each domain. We argue that these results cannot be explained as associated but unrelated deficits but instead are a principled consequence of a primary semantic impairment.
Semantic ambiguity has often been divided into 2 forms: homonymy, referring to words with 2 unrelated interpretations (e.g., bark), and polysemy, referring to words associated with a number of varying but semantically linked uses (e.g., twist). Typically, polysemous words are thought of as having a fixed number of discrete definitions, or “senses,” with each use of the word corresponding to one of its senses. In this study, we investigated an alternative conception of polysemy, based on the idea that polysemous variation in meaning is a continuous, graded phenomenon that occurs as a function of contextual variation in word usage. We quantified this contextual variation using semantic diversity (SemD), a corpus-based measure of the degree to which a particular word is used in a diverse set of linguistic contexts. In line with other approaches to polysemy, we found a reaction time (RT) advantage for high SemD words in lexical decision, which occurred for words of both high and low imageability. When participants made semantic relatedness decisions to word pairs, however, responses were slower to high SemD pairs, irrespective of whether these were related or unrelated. Again, this result emerged irrespective of the imageability of the word. The latter result diverges from previous findings using homonyms, in which ambiguity effects have only been found for related word pairs. We argue that participants were slower to respond to high SemD words because their high contextual variability resulted in noisy, underspecified semantic representations that were more difficult to compare with one another. We demonstrated this principle in a connectionist computational model that was trained to activate distributed semantic representations from orthographic inputs. Greater variability in the orthography-to-semantic mappings of high SemD words resulted in a lower degree of similarity for related pairs of this type. 
At the same time, the representations of high SemD unrelated pairs were less distinct from one another. In addition, the model demonstrated more rapid semantic activation for high SemD words, thought to underpin the processing advantage in lexical decision. These results support the view that polysemous variation in word meaning can be conceptualized in terms of graded variation in distributed semantic representations.
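A SemD-style score can be sketched as follows. This is an illustrative assumption-laden version, not the published pipeline: it takes one vector per linguistic context containing a word (e.g., an LSA or embedding vector per passage) and returns the negative log of the mean pairwise cosine similarity among those contexts, so that words used in dissimilar contexts score higher.

```python
import numpy as np
from itertools import combinations

def semantic_diversity(context_vectors):
    """SemD-like score: -log of the mean pairwise cosine similarity
    among the contexts a word occurs in. Low inter-context similarity
    -> high diversity. Sketch only; parameters of the published
    measure (context size, corpus, LSA space) differ."""
    sims = [
        np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        for a, b in combinations(context_vectors, 2)
    ]
    return -np.log(np.mean(sims))

# Toy contrast: a word whose contexts all point the same way (low diversity)
# versus a word whose contexts share little beyond a weak common component.
rng = np.random.default_rng(0)
tight = 0.1 * rng.normal(size=(5, 50)) + np.ones(50)
spread = rng.normal(size=(5, 50)) + 0.5 * np.ones(50)
print(semantic_diversity(tight) < semantic_diversity(spread))
```

On this construal, high SemD falls directly out of variable usage rather than from a fixed inventory of discrete senses, which is the graded view of polysemy the study argues for.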
Using a speeded lexical decision task, event-related potentials (ERPs), and minimum norm current source estimates, we investigated early spatiotemporal aspects of cortical activation elicited by words and pseudo-words that varied in their orthographic typicality, that is, in the frequency of their component letter pairs (bi-grams) and triplets (tri-grams). At around 100 msec after stimulus onset, the ERP pattern revealed a significant typicality effect, where words and pseudo-words with atypical orthography (e.g., yacht, cacht) elicited stronger brain activation than items characterized by typical spelling patterns (cart, yart). At approximately 200 msec, the ERP pattern revealed a significant lexicality effect, with pseudo-words eliciting stronger brain activity than words. The two main factors interacted significantly at around 160 msec, where words showed a typicality effect but pseudo-words did not. The principal cortical sources of the effects of both typicality and lexicality were localized in the inferior temporal cortex. Around 160 msec, atypical words elicited the stronger source currents in the left anterior inferior temporal cortex, whereas the left perisylvian cortex was the site of greater activation to typical words. Our data support distinct but interactive processing stages in word recognition, with surface features of the stimulus being processed before the word as a meaningful lexical entry. The interaction of typicality and lexicality can be explained by integration of information from the early form-based system and lexicosemantic processes.
The relationship between recognition memory and repetition priming remains unclear. Priming is believed to reflect increased processing fluency for previously studied items relative to new items. Manipulations that affect fluency can also affect the likelihood that participants will judge items as studied in recognition tasks. This attribution of fluency to memory has been related to the familiarity process, as distinct from the recollection process, that is assumed by dual-process models of recognition memory. To investigate the time courses and neural sources of fluency, familiarity, and recollection, we conducted an event-related potential (ERP) study of recognition memory using masked priming of test cues and a remember/know paradigm. During the recognition test, studied and unstudied words were preceded by a brief, masked word that was either the same as or different from the test word. Participants decided quickly whether each item had been studied ("old" or "new"), and for items called old, indicated whether they "remembered" (R) the encoding event, or simply "knew" (K) the item had been studied. Masked priming increased the proportion of K, but not R, judgments. Priming also decreased response times for hits but not correct rejections (CRs). Four distinct ERP effects were found. A medial-frontal FN400 (300-500 msec) was associated with familiarity (R, K Hits > CRs) and a centro-parietal late positivity (500-800 msec) with recollection (R Hits > K Hits, CRs). A long-term repetition effect was found for studied items judged "new" (Misses > CRs) in the same time window as the FN400, but with a posterior distribution. Finally, a centrally distributed masked priming effect was visible between 150 and 250 msec and continued into the 300-500 msec time window, where it was topographically dissociable from the FN400. These results suggest that multiple neural signals are associated with repetition and potentially contribute to recognition memory.