The argument for preceding multiple analyses of variance (ANOVAs) with a multivariate analysis of variance (MANOVA) to control for Type I error is challenged. Several situations are discussed in which multiple ANOVAs might be conducted without the necessity of a preliminary MANOVA. Three reasons for considering a multivariate analysis are discussed: to identify constructs underlying the outcome variable system, to select subsets of variables, and to determine the relative worth of variables. The analyses discussed in this article are those appropriate in research situations in which analysis of variance techniques are useful. These analyses are used to study the effects of treatment variables on outcome/response variables (in ex post facto as well as experimental studies). We speak of a univariate analysis of variance (ANOVA) when a single outcome variable is involved; when multiple outcome variables are involved, it is a multivariate analysis of variance (MANOVA). (Covariance analyses may also be included.) With multiple outcome variables, the typical analysis approach used in the group-comparison context, at least in the behavioral sciences, is either to (a) conduct multiple ANOVAs or (b) conduct a MANOVA followed by multiple ANOVAs. That these are two popular choices may be concluded from a survey of some prominent behavioral science journals. The 1986 issues of five journals published by the American Psychological Association were surveyed:
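For readers less familiar with the two approaches, the following is a minimal sketch in Python of option (a), separate univariate ANOVAs on each outcome, and option (b), a preliminary MANOVA followed by the same ANOVAs. The data, group labels, and outcome names are hypothetical and are not taken from any of the surveyed studies.

```python
# Minimal sketch of approaches (a) and (b); all data and names are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
groups, n_per_group = ["control", "treatment_a", "treatment_b"], 30
df = pd.DataFrame({
    "group": np.repeat(groups, n_per_group),
    "outcome1": rng.normal(50, 10, 3 * n_per_group),
    "outcome2": rng.normal(100, 15, 3 * n_per_group),
})

# (a) Multiple univariate ANOVAs, one per outcome variable
for outcome in ["outcome1", "outcome2"]:
    samples = [df.loc[df.group == g, outcome] for g in groups]
    f_stat, p_val = f_oneway(*samples)
    print(f"ANOVA on {outcome}: F = {f_stat:.2f}, p = {p_val:.3f}")

# (b) A preliminary MANOVA on both outcomes jointly (Wilks' lambda and related
# criteria), typically followed by the univariate ANOVAs above only if the
# omnibus multivariate test is significant
manova = MANOVA.from_formula("outcome1 + outcome2 ~ group", data=df)
print(manova.mv_test())
```

Whether step (b) actually protects the subsequent univariate tests against Type I error inflation is precisely the argument the article challenges.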
The validity of the Lollipop Test: A Diagnostic Screening Test of School Readiness was examined using the Metropolitan Readiness Test (MRT), Level I, Form Q, as the criterion. The sample of 293 kindergarten pupils was administered the MRT by their teachers in classroom groups; the Lollipop Test was individually administered by qualified examiners. The statistical significance of all correlations (p < .001) demonstrated appreciable concurrent validity across the test batteries. Further, a canonical correlation analysis indicated a high degree of multivariate relationship between the tests. Implications of these results were discussed with respect to school readiness screening and the use of the Lollipop Test.
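As an illustration of the kind of multivariate summary reported above, here is a minimal sketch of a canonical correlation analysis between two score matrices. The data are simulated, and the subscale counts attributed to each battery are assumptions for the example, not the study's instruments.

```python
# Sketch of a canonical correlation analysis between two simulated test batteries.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n = 293                                        # sample size reported in the abstract
lollipop = rng.normal(size=(n, 4))             # e.g., four Lollipop subscales (assumed)
mrt = 0.6 * lollipop[:, :3] + rng.normal(size=(n, 3))  # e.g., three MRT subtests (assumed)

cca = CCA(n_components=3)
u, v = cca.fit_transform(lollipop, mrt)

# Canonical correlations are the correlations between paired canonical variates
canonical_rs = [np.corrcoef(u[:, i], v[:, i])[0, 1] for i in range(3)]
print("Canonical correlations:", np.round(canonical_rs, 3))
```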
Several advantages to the use of factor scores as independent variables in a multiple regression equation have been advocated in the literature. To provide guidance for selecting the most desirable type of factor score upon which to calculate a regression equation, computer-based Monte Carlo methods were used to compare the replication predictive accuracy of regression on five "complete" and four "incomplete" factor score estimation methods. For several levels of multiple correlation (R² = .30, .50, and .70) and for several subject-to-variable sampling ratios (3:1, 5:1, and 10:1), prediction on incomplete factor scores showed better double cross-validated prediction accuracy than prediction on complete factor scores. Moreover, the unique unit-weighted factor score was superior among the incomplete methods.
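The contrast between a "complete" and an "incomplete" factor score under double cross-validation can be sketched as follows. This is only an illustration of the logic, with simulated data: principal components stand in for the factor extraction, and a unit-weighted composite of the four salient variables stands in for an incomplete score; none of it reproduces the article's nine estimation methods.

```python
# Sketch: "complete" (all-variable) vs "incomplete" (unit-weighted, salient-only)
# factor scores compared by split-half double cross-validation. Simulated data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n, p = 300, 10
factor = rng.normal(size=n)
loadings = np.r_[np.full(4, 0.8), np.full(6, 0.1)]   # four salient, six weak variables
X = np.outer(factor, loadings) + rng.normal(size=(n, p))
y = factor + rng.normal(size=n)

def double_cv_r(score, y):
    """Average holdout correlation: each half supplies the weights for the other half."""
    half = len(y) // 2
    rs = []
    for train, test in [(slice(0, half), slice(half, None)),
                        (slice(half, None), slice(0, half))]:
        slope, intercept = np.polyfit(score[train], y[train], 1)
        rs.append(np.corrcoef(slope * score[test] + intercept, y[test])[0, 1])
    return np.mean(rs)

pca = PCA(n_components=1).fit(X)       # factor extraction (on the full sample, for brevity)
complete = pca.transform(X)[:, 0]      # "complete" score: weights every variable
incomplete = X[:, :4].sum(axis=1)      # "incomplete" score: unit weights on the salient four

print("complete  :", round(double_cv_r(complete, y), 3))
print("incomplete:", round(double_cv_r(incomplete, y), 3))
```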
Cognition is the way we use mental skills to acquire knowledge, manipulate ideas, and process new information and beliefs. The Strategic Thinking Questionnaire (STQ), which measures three such skills (systems thinking, reframing, and reflection), was used to collect data from students preparing for school leadership roles at four universities in the United States, Malaysia, Hong Kong, and Shanghai. It was thought that the use of these skills might vary from country to country because of western and eastern cultural norms. Based on self-reported data from 328 educators preparing for school leadership roles, we concluded that strategic thinking skills were used in all locations but that the variance in their use is more a function of respondents' age and gender than of location. These findings have implications for the training, professional development, and selection of aspiring leaders.
The long-term predictive validities of the Metropolitan Readiness Tests (MRT) and the Lollipop Test: A Diagnostic Screening Test of School Readiness were examined. The achievement of 246 students in reading and mathematics, as measured by the Stanford Achievement Test and teacher-assigned grades in first, third, and fourth grades, was predicted from kindergarten administrations of each of these test batteries. All multiple correlations for the Lollipop Test and the MRT were found to be significant and similar in magnitude. Perhaps particularly noteworthy was that the Lollipop Test, a shorter screening instrument, performed as well as the lengthier MRT in predicting school achievement.
A recent condemnation of the use of factor scores as predictors in multiple regression because of the loss in "predictive accuracy" incurred in reducing rank (Kukuk and Baty, 1979) was reexamined from the more important predictive perspective of replication predictive accuracy. Using a computer-based Monte Carlo procedure parallel to that employed in a recent comparison of various types of factor scores (Morris, 1979), the investigator compared the double cross-validation replication predictive accuracies of six types of factor scores with that of full-rank data by utilizing a data set from the literature in which classroom achievement was predicted from affective and cognitive variables. Prediction was significantly more accurate for each of the six types of factor scores than for full-rank data. Further, corroborative evidence was presented for the superiority of incomplete factor scores over the nonincomplete methods considered. Moreover, implications for reconsidering the sample size typically deemed necessary for factor analysis in predictive situations were offered.

A recent article in this journal (Kukuk and Baty, 1979) severely criticized the use of factor scores as predictor variables in multiple regression equations for several reasons. Paramount in the rationale for this criticism was a demonstration of the loss in "predictive accuracy" incurred in reducing rank. This point was illustrated with an example data set in which demographic characteristics of the family were used to predict students' reading achievement.
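The full-rank versus reduced-rank comparison at issue can be sketched in compressed form as follows, again with simulated data and a single split-half direction of the cross-validation for brevity. PCA stands in for the factor analysis, and nothing here reproduces the article's data set or procedure.

```python
# Sketch: holdout prediction accuracy for full-rank predictors vs reduced-rank
# component scores. Simulated data; one direction of a split-half cross-validation.
import numpy as np
from numpy.linalg import lstsq
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n, p, k = 200, 12, 3
latent = rng.normal(size=(n, k))                          # underlying "factors"
X = latent @ rng.normal(size=(k, p)) + rng.normal(scale=0.8, size=(n, p))
y = latent[:, 0] + 0.5 * latent[:, 1] + rng.normal(size=n)

def holdout_r(x_train, y_train, x_test, y_test):
    """OLS weights from the derivation half applied to the holdout half."""
    beta, *_ = lstsq(np.column_stack([np.ones(len(x_train)), x_train]), y_train, rcond=None)
    preds = np.column_stack([np.ones(len(x_test)), x_test]) @ beta
    return np.corrcoef(preds, y_test)[0, 1]

half = n // 2
scores = PCA(n_components=k).fit(X[:half]).transform(X)   # reduced-rank component scores

print("full rank       :", round(holdout_r(X[:half], y[:half], X[half:], y[half:]), 3))
print("component scores:", round(holdout_r(scores[:half], y[:half], scores[half:], y[half:]), 3))
```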
The WISC-R subtest profiles of 113 children classified as severely emotionally disturbed (88 males and 25 females; 71 Caucasians and 42 Negroes) ranging in age from 6 yr., 11 mo. to 13 yr., 8 mo. were examined. Diagnosis was based on psychological testing and quantitative assessment of behavioral deviations by parents, teachers, and psychologists. Scores for Caucasian children were significantly superior to those of Negro children on the Information, Similarities, Vocabulary, and Picture Arrangement subtests. However, all subtest means for both races were significantly lower than those in the standardization sample. A multivariate test of interaction and a Hotelling T² suggested that the profiles of Caucasian and Negro subjects were not as “flat” as those in the standardization sample and also were not parallel. Investigation of the shape of the profiles of the two races showed an elevated Picture Completion score for the Negro children and a depressed Coding score for the Caucasian children. No evidence supported a discrepancy between Verbal and Performance abilities.
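For reference, a two-sample Hotelling T² of the kind used for the parallelism question can be sketched as follows. The simulated subtest scores, the number of subtests, and the use of adjacent-subtest difference scores to address parallelism are illustrative assumptions, not the study's data or exact procedure; only the group sizes echo the abstract.

```python
# Sketch: two-sample Hotelling T^2 applied to adjacent-subtest difference scores,
# so the null hypothesis concerns profile parallelism rather than overall level.
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(4)
p = 10                                        # number of subtests in the profile (assumed)
group_a = rng.normal(10, 3, size=(71, p))     # e.g., Caucasian subsample (simulated)
group_b = rng.normal(9, 3, size=(42, p))      # e.g., Negro subsample (simulated)

def hotelling_t2(x1, x2):
    """Two-sample Hotelling T^2 with its F approximation and p value."""
    n1, n2, k = len(x1), len(x2), x1.shape[1]
    diff = x1.mean(axis=0) - x2.mean(axis=0)
    pooled = ((n1 - 1) * np.cov(x1, rowvar=False) +
              (n2 - 1) * np.cov(x2, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(pooled, diff)
    f_stat = (n1 + n2 - k - 1) / (k * (n1 + n2 - 2)) * t2
    return t2, f_stat, f.sf(f_stat, k, n1 + n2 - k - 1)

# Parallelism: compare the groups on the p-1 adjacent-subtest difference scores
t2, f_stat, p_val = hotelling_t2(np.diff(group_a, axis=1), np.diff(group_b, axis=1))
print(f"T^2 = {t2:.2f}, F = {f_stat:.2f}, p = {p_val:.4f}")
```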