This paper examines three methodological issues concerning the measurement of semantic memory impairment in brain-damaged patients. Ten carefully selected patients with dementia of Alzheimer's type (DAT) and anomia were studied. A battery of perceptual tests and direct tests of semantic memory led to the conclusion that these patients represented a homogeneous group having a prominent deterioration of their semantic memory store without visual perceptual deficits. The first issue addressed in this patient group was whether verbal fluency impairment accurately reflected the loss of semantic memory. It was found that verbal fluency (generation of semantic category lists) was impaired due to two major constraints: deterioration of the semantic memory store, and variable difficulties in semantic search. Verbal fluency, therefore, reflects semantic memory loss to some degree, but is not a direct test of the semantic memory store in DAT. The second issue was whether semantic memory impairment in our patients conformed to the 'semantic storage disorder' syndrome hypothesized by Shallice (1987). It was shown that, consistent with this hypothesis, the patients demonstrated co-occurrence of consistency of errors, loss of semantic cueing, and preserved superordinate knowledge with loss of detailed knowledge of concept items. The third issue was whether semantic cueing and semantic priming are altered in a similar manner in DAT. It was demonstrated that semantic cueing and semantic priming, using the same words whose concepts were degraded in semantic memory, yielded entirely different patterns of results. Cueing and priming therefore may not be used interchangeably in the study of semantic loss after brain damage.
We examined automatic spatial alignment effects evoked by handled objects. Participants responded to the color of a task-irrelevant handled object whose handle was aligned or misaligned with the response hand; responses to color were faster when the handle was aligned with the responding hand. Alignment effects were observed only when the task was to make a reach-and-grasp response. No alignment effects occurred if the response involved a left-right key press. Alignment effects emerged over time, becoming more apparent either when the color cue was delayed or when relatively long, rather than short, response times were analyzed. These results are consistent with neurophysiological evidence indicating that the cued goal state has a modulatory influence on sensorimotor representations, and that handled objects initially generate competition between neural populations coding for a left- or right-handed action that must be resolved before a particular hand is favored.
Viewing of single words produces a cognitively complex mental state in which anticipation, emotional responses, visual perceptual analysis, and activation of orthographic representations are all occurring. Previous PET studies have produced conflicting results, perhaps due to the conflation of these separate processes or the presence of subtle differences in stimulus material and methodology. A PET study of 10 normal individuals was carried out using the bolus H2(15)O intravenous injection technique to examine components of processing of passively viewed words. Subjects viewed blocks of random-letter strings or abstract, concrete, or emotional words (words with positive or negative emotional salience). Baseline conditions were either passive viewing of plus signs or an anticipatory state (viewing plus signs after being warned to expect words or random letters to appear imminently). All words (and to a lesser extent the random letters) produced robust activation of cerebral blood flow in the left posterior temporal lobe, in addition to bilateral occipital activation. Furthermore, emotional words produced activation in orbital and midline frontal structures. Further activation in the left orbital frontal gyrus, the left inferior temporal gyrus, the left caudate nucleus, the anterior cingulate, and the cerebellum could be ascribed to the anticipatory state. This pattern of activity suggests that the occipital regions are recruited for visual-perceptual analysis of words, and the left temporal lobe represents the neural substrate for the orthographic lexicon. In addition, emotionally relevant material produces further processing in limbic brain structures of the frontal lobes. Detailed analysis of the task therefore substantially clarifies the neuroanatomic basis of single-word processing.
The determination of the visual features mediating letter identification has a long-standing history in cognitive science. Researchers have proposed many sets of letter features as important for letter identification, but no such sets have yet been derived directly from empirical data. In the study reported here, we applied the Bubbles technique to reveal directly which areas at five different spatial scales are efficient for the identification of lowercase and uppercase Arial letters. We provide the first empirical evidence that line terminations are the most important features for letter identification. We propose that these small features, represented at several spatial scales, help readers to discriminate among visually similar letters.
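The Bubbles technique referred to above reveals diagnostic image regions by showing observers stimuli sampled through randomly placed Gaussian apertures and correlating aperture locations with response accuracy. As a rough illustrative sketch (not the authors' implementation; all parameter values and names here are arbitrary assumptions), a single-scale spatial Bubbles mask could be generated like this:

```python
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Sum of Gaussian apertures ("bubbles") at random locations,
    clipped to [0, 1]; 1 = fully revealed, 0 = fully hidden."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def sample_stimulus(image, mask, background=0.5):
    """Reveal the image only through the bubbles; show a uniform
    mid-gray background elsewhere."""
    return mask * image + (1.0 - mask) * background

rng = np.random.default_rng(0)
letter = rng.random((64, 64))  # stand-in for a normalized letter image
mask = bubbles_mask(letter.shape, n_bubbles=10, sigma=4.0, rng=rng)
stim = sample_stimulus(letter, mask)
```

Across many trials, averaging the masks from correct trials and subtracting those from incorrect trials yields a classification image whose high-valued regions (here, line terminations) are the features diagnostic for identification. The multi-scale version in the paper repeats this sampling at five spatial scales.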
The authors examined spatial frequency (SF) tuning of upright and inverted face identification using an SF variant of the Bubbles technique (F. Gosselin & P. G. Schyns, 2001). In Experiment 1, they validated the SF Bubbles technique in a plaid detection task. In Experiments 2a-c, the SFs used for identifying upright and inverted inner facial features were investigated. Although a clear inversion effect was present (mean accuracy was 24% higher and response times 455 ms shorter for upright faces), SF tunings were remarkably similar in both orientation conditions (mean r = .98; an SF band of 1.9 octaves centered at 9.8 cycles per face width for faces of about 6°). In Experiments 3a and b, the authors demonstrated that their technique is sensitive to both subtle bottom-up and top-down induced changes in SF tuning, suggesting that the null results of Experiments 2a-c are real. The most parsimonious explanation of the findings is provided by the quantitative account of the face inversion effect: The same information is used for identifying upright and inverted inner facial features, but processing has greater sensitivity with the former.
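In the SF variant of Bubbles, the random sampling is done along the spatial frequency axis rather than over image locations: on each trial the stimulus is filtered with a smooth random weighting of SF bands, and the weightings are later regressed against accuracy to recover the diagnostic SF band. A minimal sketch under assumed parameters (Gaussian "bubbles" on a log-frequency axis, filtering via the FFT; not the published implementation) might look like this:

```python
import numpy as np

def sf_bubbles_filter(image, n_bubbles, sigma_octaves, rng):
    """Weight each spatial frequency by a random sum of Gaussians
    placed on a log2 (octave) frequency axis, then filter via FFT."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fy ** 2 + fx ** 2)          # radial SF, cycles/pixel
    log_r = np.log2(np.maximum(radius, 1e-6))    # log2 scale -> octaves
    lo, hi = np.log2(1.0 / max(h, w)), np.log2(0.5)
    weights = np.zeros_like(log_r)
    for _ in range(n_bubbles):
        center = rng.uniform(lo, hi)             # random SF bubble center
        weights += np.exp(-((log_r - center) ** 2) / (2 * sigma_octaves ** 2))
    weights = np.clip(weights, 0.0, 1.0)
    filtered = np.fft.ifft2(np.fft.fft2(image) * weights).real
    return filtered, weights

rng = np.random.default_rng(1)
face = rng.random((128, 128))  # stand-in for a normalized face image
filtered, weights = sf_bubbles_filter(face, n_bubbles=5,
                                      sigma_octaves=0.5, rng=rng)
```

Because each trial passes a different random mixture of SF bands, regressing the per-trial weight profiles against identification accuracy recovers the tuning curve, e.g. the 1.9-octave band centered at 9.8 cycles per face width reported above.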