Background and Purpose: To be useful for clinical research, an outcome measure must be feasible to administer and have sound psychometric attributes, including reliability, validity, and sensitivity to change. This study characterizes the psychometric properties of the Stroke Impact Scale (SIS) Version 2.0. Methods: Version 2.0 of the SIS is a self-report measure that includes 64 items and assesses 8 domains (strength, hand function, ADL/IADL, mobility, communication, emotion, memory and thinking, and participation). Subjects with mild and moderate strokes completed the SIS at 1 month (n=91), at 3 months (n=80), and at 6 months after stroke (n=69). Twenty-five subjects had a replicate administration of the SIS 1 week after the 3-month or 6-month test. We evaluated internal consistency and test-retest reliability. The validity of the SIS domains was examined by comparing the SIS with existing stroke measures and by comparing differences in SIS scores across Rankin scale levels. The mixed model procedure was used to evaluate the responsiveness of the SIS domain scores to change. Results: Each of the 8 domains met or approached the standard of a 0.9 α coefficient for comparing the same patients across time. The intraclass correlation coefficients for test-retest reliability of the SIS domains ranged from 0.70 to 0.92, except for the emotion domain (0.57). When the domains were compared with established outcome measures, the correlations were moderate to strong (0.44 to 0.84). The participation domain was most strongly associated with SF-36 social role function. SIS domain scores discriminated across 4 Rankin levels. SIS domains are responsive to change due to ongoing recovery. Responsiveness to change is affected by stroke severity and time since stroke. Conclusions: This new, stroke-specific outcome measure is reliable, valid, and sensitive to change. We are optimistic about the utility of the measure.
More studies are required to evaluate the SIS in larger and more heterogeneous populations and to evaluate the feasibility and validity of proxy responses for the most severely impaired patients. (Stroke. 1999;30:2131-2140.)
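The two reliability statistics reported above, Cronbach's α for internal consistency and an intraclass correlation coefficient (ICC) for test-retest reliability, can be sketched in a few lines. This is a generic illustration on invented scores for five hypothetical subjects, not the SIS data or analysis; the ICC variant shown is the two-way random-effects ICC(2,1).

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an n-subjects x k-items score matrix."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)       # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def icc_2_1(Y):
    """Two-way random-effects ICC(2,1) for an n-subjects x k-occasions matrix."""
    n, k = Y.shape
    gm = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - gm) ** 2).sum()   # between-subjects
    ss_cols = n * ((Y.mean(axis=0) - gm) ** 2).sum()   # between-occasions
    ss_err = ((Y - gm) ** 2).sum() - ss_rows - ss_cols # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical test and retest scores for 5 subjects (invented data).
test = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
retest = test + np.array([0.5, -0.5, 0.5, -0.5, 0.5])
scores = np.column_stack([test, retest])

print(round(cronbach_alpha(scores), 3))  # high: the two occasions agree closely
print(round(icc_2_1(scores), 3))         # high test-retest agreement
```

With noisier retest scores both statistics drop, which is why the abstract reports them per domain rather than assuming stability.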
The actual impact of cognitive theory on testing contrasts sharply with its potential impact, which suggests some deep incompatibilities between the areas. This article describes and illustrates a cognitive design system approach that makes cognitive theory central to developing valid tests. To resolve incompatibilities between cognitive theory and testing, the cognitive design system approach includes both conceptual and procedural frameworks. To illustrate the approach, an item bank for measuring abstract reasoning was generated from cognitive theory (i.e., P. A. Carpenter, M. A. Just, & P. Shell's, 1990, processing theory). The construct validity of the generated item bank was strongly supported by several studies from the cognitive design system approach.

Developing tests from cognitive theory has been an intriguing possibility for psychological and educational measurement (Embretson, 1985; Mislevy, 1993; Wittrock & Baker, 1991). Many item types that appear on tests have been studied by contemporary cognitive psychology methods. Often, ability test item types are suitable for studying cognitive theories because they are complex problem-solving tasks.

Despite the interest in cognitive theory, its purported promise for test development is barely realized. Certainly, contemporary cognitive concepts are often used to describe traditionally designed measures. Various ability test scores are described as reflecting parallel versus serial processing, cognitive consistency, executive processing, and so forth (e.g., Kaufman & Kaufman, 1993). However, as noted by Pellegrino (1988), applying cognitive concepts to describe traditional psychometric findings misses the real potential of cognitive theory; namely, cognitive theory is useful for test design. Cognitive psychology research has an incidental bonus for test design, because justifiable operational definitions are required for construct measurement.
The specific operations are often detailed descriptions of task stimulus properties. Thus, cognitive research also provides results
Objective. To determine the effects of participation in a low-impact aerobic exercise program on fatigue, pain, and depression; to examine whether intervention groups compared with a control group differed on functional (grip strength and walk time) and disease activity (total joint count, erythrocyte sedimentation rate, and C-reactive protein) measures and aerobic fitness at the end of the intervention; and to test which factors predicted exercise participation. Methods. A convenience sample of 220 adults with rheumatoid arthritis (RA), ages 40-70, was randomized to 1 of 3 groups: class exercise, home exercise using a videotape, and control group. Measures were obtained at baseline (T1), after 6 weeks of exercise (T2), and after 12 weeks of exercise (T3). Results. Using structural equation modeling, overall symptoms (latent variable for pain, fatigue, and depression) decreased significantly at T3 (P < 0.04) for the class exercise group compared with the control group. There were significant interaction effects of time and group for the functional measures of walk time and grip strength: the treatment groups improved more than the control group (P < 0.005). There were no significant increases in measures of disease activity. Fatigue and perceptions of benefits and barriers to exercise affected participants' amount of exercise, supporting previous research. Conclusion. This study supported the positive effects of exercise on walk time and grip strength, and demonstrated that fatigue and perceived benefits/barriers to exercise influenced exercise participation. Furthermore, overall symptoms of fatigue, pain, and depression were positively influenced in this selective group of patients with RA ages 40-70 years.
The cognitive characteristics of paragraph comprehension items were studied by comparing models that deal with two general processing stages: text representation and response decision. The models that were compared included the propositional structure of the text (Kintsch & van Dijk, 1978), various counts of surface structure variables and word frequency (Drum et al., 1981), a taxonomy of levels of text questions (Anderson, 1972), and some new models that combine features of these models. Calibrations from the linear logistic latent trait model allowed evaluation of the impact of the cognitive variables on item responses. The results indicate that successful prediction of item difficulty is obtained from models with wide representation of both text and decision processing. This suggests that items can be screened for processing difficulty prior to being administered to examinees. However, the results also have important implications for test validity in that the two processing stages involve two different ability dimensions.

Susan E. Whitely. This work was supported by the Navy Personnel Research and Development Center through the Army Research Office and Battelle Memorial Institute Contract No. 0855. The opinions expressed in this article are those of the authors, are not official, and do not reflect the views of the Departments of the Navy or Army.
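The linear logistic latent trait model (LLTM) used above constrains each item's Rasch difficulty to a weighted sum of its cognitive-feature loadings, b_i = Σ_k q_ik η_k. The sketch below illustrates only that linear decomposition, with an invented Q-matrix and invented weights rather than the paragraph-comprehension data; a real LLTM calibration would estimate the weights by maximum likelihood from examinee responses, not by least squares on known difficulties.

```python
import numpy as np

# Hypothetical Q-matrix: 5 items x 3 cognitive features
# (e.g., text-representation load, decision load, word-frequency load).
Q = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [1, 0, 1],
], dtype=float)

# Invented feature weights (eta): each feature's contribution to difficulty.
eta_true = np.array([0.8, -0.3, 1.2])

# In the LLTM, item difficulty is a linear function of the features.
b = Q @ eta_true

# Recover the weights from the calibrated difficulties by least squares.
eta_hat, *_ = np.linalg.lstsq(Q, b, rcond=None)
print(eta_hat)
```

Because the model predicts difficulty from item features alone, new items can be screened for expected processing difficulty before any examinee sees them, which is the point the abstract makes.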
Cognitive psychology principles have been heralded as possibly central to construct validity. In this paper, testing practices are examined in three stages: (a) the past, in which the traditional testing research paradigm left little role for cognitive psychology principles, (b) the present, in which testing research is enhanced by cognitive psychology principles, and (c) the future, for which we predict that cognitive psychology's potential will be fully realized through item design. An extended example of item design by cognitive theory is given to illustrate the principles. A spatial ability test that consists of an object assembly task highlights how cognitive design principles can lead to item generation.