A standardized rule-based scoring system, the Correct Information Unit (CIU) analysis, was used to evaluate the informativeness and efficiency of the connected speech of 20 non-brain-damaged adults and 20 adults with aphasia in response to 10 elicitation stimuli. The interjudge reliability of the scoring system proved to be high, as did the session-to-session stability of performance on measures. There was a significant difference between the non-brain-damaged and aphasic speakers on each of the five measures derived from CIU and word counts. However, the three calculated measures (words per minute, percent CIUs, and CIUs per minute) more dependably separated aphasic from non-brain-damaged speakers on an individual basis than the two counts (number of words and number of CIUs).
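The three calculated measures are simple ratios of the two counts and the speaking time. As a minimal sketch, using hypothetical values rather than data from the study, they can be derived like this:

```python
# Hypothetical counts for one speech sample (illustrative values only,
# not data from the study).
num_words = 350   # total words produced
num_cius = 240    # words scored as correct information units
minutes = 3.5     # speaking time in minutes

# The three calculated measures derived from the two counts:
words_per_minute = num_words / minutes     # speech rate
percent_cius = 100 * num_cius / num_words  # informativeness
cius_per_minute = num_cius / minutes       # efficiency

print(words_per_minute, percent_cius, cius_per_minute)
```

Because the calculated measures normalize the raw counts by time and by output, they are less affected by how much a given speaker happens to say, which is consistent with their better individual-level discrimination reported above.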
A standard rule-based system was used to evaluate the presence, accuracy, and completeness of main concepts in the connected speech of 20 non-brain-damaged adults and 20 adults with aphasia. Main concepts form a skeletal outline of the most important information (or "gist") in a message. The interjudge and intrajudge reliability of the main concept scoring system and the test-retest stability of scores were acceptable. The non-brain-damaged group produced significantly more Accurate/complete main concepts, and significantly fewer Accurate/incomplete, Inaccurate, and Absent main concepts than the group with aphasia. However, when the performance of individual subjects was evaluated, what best discriminated the performance of subjects with aphasia from that of non-brain-damaged subjects was not the number of main concepts they failed to mention but the accuracy and completeness of the main concepts they did produce. Measures of main concept production may be a clinically useful complement to other measures of communicative informativeness and efficiency.
The effect of speech sample size on the test-retest stability of two measures of connected speech—words per minute (WPM) and percent of words that are correct information units (Percent CIUs)—was evaluated. A standard set of 10 stimuli was used to elicit connected speech from 20 non-brain-damaged adults and 20 adults with aphasia. Each subject’s responses to the 10 stimuli were transcribed and scored for WPM and Percent CIUs. Then each subject’s responses to the 10 stimuli were randomly divided to produce smaller speech samples representing his or her responses to 1, 2, 3, 4, 5, and 7 stimuli. The test-retest stability of the WPM and Percent CIUs measures was then evaluated for each of the smaller sample sizes and for the complete 10-stimulus sample. For both groups, the test-retest stability of the two measures increased as sample size increased, with the greatest increases occurring as samples increased in size from those representing 1 stimulus to those representing 4 or 5 stimuli, with smaller increases in stability thereafter. In general, these results suggest that the best balance between high test-retest stability and the time and effort required to transcribe and score speech samples can be achieved with samples representing 4 or 5 stimuli (an average of 300 to 400 words for aphasic subjects), although this will vary across individuals.
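The subdivision procedure above can be illustrated with a short sketch. All values here are hypothetical (invented for illustration, not data from the study); the point is only how smaller samples are drawn from a subject's 10 responses and re-scored:

```python
import random

# Hypothetical per-stimulus tallies for one subject (illustrative only):
# (words, CIUs, minutes) for each of the 10 elicitation stimuli.
responses = [(45, 30, 0.5), (60, 41, 0.6), (38, 25, 0.4), (72, 50, 0.8),
             (55, 37, 0.5), (40, 28, 0.4), (65, 44, 0.7), (50, 33, 0.5),
             (58, 39, 0.6), (47, 31, 0.5)]

def sample_measures(subset):
    """Pool a subset of responses and return (WPM, Percent CIUs)."""
    words = sum(w for w, c, m in subset)
    cius = sum(c for w, c, m in subset)
    minutes = sum(m for w, c, m in subset)
    return words / minutes, 100 * cius / words

# Randomly draw smaller samples of 1, 2, 3, 4, 5, and 7 stimuli,
# mirroring how the full 10-stimulus set was subdivided.
rng = random.Random(0)
for n in (1, 2, 3, 4, 5, 7, 10):
    subset = rng.sample(responses, n)
    wpm, pct_ciu = sample_measures(subset)
    print(f"{n:2d} stimuli: {wpm:5.1f} WPM, {pct_ciu:4.1f}% CIUs")
```

Estimates from very small subsets fluctuate more from draw to draw than those from larger subsets, which is the instability that the study quantified across test and retest sessions.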
This paper reviews the literature concerning auditory comprehension of discourse by adults with and without brain damage. It also reports the results of an investigation of comprehension of spoken narrative discourse by adults with left-hemisphere brain damage and aphasia, right-hemisphere brain damage, or traumatic brain injury (20 subjects per group), as well as that of 40 adults without brain damage. These subjects were tested with the 10 narratives from the Discourse Comprehension Test (Brookshire & Nicholas, 1993). Test questions assess comprehension and retention of stated and implied main ideas and details. The performance of the groups with brain damage was qualitatively similar to that of the group with no brain damage, but quantitatively inferior. The performance of the groups with aphasia, right-hemisphere damage, and traumatic brain injury was both qualitatively and quantitatively similar. The performance of the four groups was strongly affected by the salience of information in the stories. All 100 subjects responded correctly to main idea questions more often than to detail questions. The effect of directness was less strong than that of salience, but all four groups produced more correct responses when questions assessed stated information than when they assessed implied information. The effect of directness was greater for detail questions than for main idea questions. Although this study was not designed to assess the validity of any discourse comprehension model, the performance pattern of both subjects with no brain damage and those with brain damage is consistent with a resource allocation model of discourse comprehension.
Aphasic and non-brain-damaged adults were tested with two forms of the Nelson Reading Skills Test (NRST; Hanna, Schell, & Schreiner, 1977). The NRST is a standardized measure of silent reading for students in Grades 3 through 9 and assesses comprehension of information at three levels of inference (literal, translational, and higher level). Subjects' responses to NRST test items were evaluated to determine if their performance differed on literal, translational, and higher level items. Subjects' performance was also evaluated to determine the passage dependency of NRST test items, that is, the extent to which readers had to rely on information in the NRST reading passages to answer test items. Higher level NRST test items (requiring complex inferences) were significantly more difficult for both non-brain-damaged and aphasic adults than literal items (not requiring inferences) or translational items (requiring simple inferences). The passage dependency of NRST test items for aphasic readers was higher than that reported by Nicholas, MacLennan, and Brookshire (1986) for multiple-sentence reading tests designed for aphasic adults. This suggests that the NRST is a more valid measure of the multiple-sentence reading comprehension of aphasic adults than the other tests evaluated by Nicholas et al. (1986).