In this study, meta-analysis techniques were used to synthesize research on the effectiveness of three major activity-based elementary science programs (ESS, SAPA, and SCIS), which were developed with federal support. In 57 controlled studies, outcomes were measured in over 900 classrooms; the overall mean effect size across all outcome areas was .35. The mean effect size was .52 for science process tests, .16 for science content, and .28 for affective outcomes. On average, gains were also realized in creativity, intelligence, language, and mathematics. Only 3 of 14 coded study features were related to reported effects: disadvantaged students derived greater benefits than other students; tests not biased in favor of the activity-based programs yielded positive but lower effects than those favoring the activity-based approach; and published reports showed higher effects than unpublished reports. The effects of particular programs reflect their relative curricular emphases. In three follow-up studies, student groups that had had activity-based programs in elementary school and had later experienced traditional science programs during the middle school years could not be consistently distinguished from control groups.

Various aspects of the science curriculum reform period have been examined by reviewers over the years. Lockard (1963-1977) cataloged the appearance and disappearance of curriculum projects. Welch (1979) chronicled the period's history. Weiss (1978), through surveys, and Stake & Easley (1978), through case studies, documented the use of the programs and the reactions of school personnel and on-site observers. Many researchers, especially doctoral students, investigated the effects of implementation and teacher-training efforts and the effects of the new programs on students and teachers. Smith (1969) and Gallagher (1972) presented early narrative reviews of this work.
In addition, the Educational Resources Information Center (ERIC) at Ohio State and the journal Science Education have periodically published summaries of studies of the programs. Most recently, meta-analyses of studies evaluating the effects of the reform programs on teaching practices and
One major response to the widespread concern over the state of science education two decades ago was the launching, with heavy National Science Foundation support, of several curriculum reform projects. As a result of this effort, at the elementary school level in particular, three major curriculum programs were developed and began to be used in classrooms during the late 1960s and early 1970s. These programs were the Elementary Science Study (ESS), developed at the Education Development Center, Newton, Massachusetts; Science-A Process Approach (SAPA), developed under the direction of the Commission on Science Education of the American Association for the Advancement of Science; and the Science Curriculum Improvement Study (SCIS), developed at the Lawrence Hall of Science at the University of California at Berkeley. The final products of all three programs differed from the science programs traditionally used in elementary schools in that they did not use textbooks and they made extensive use of laboratory activities, for which materials were supplied by the firms marketing the programs. In addition, at least as much attention was given to teaching the methods of science as to teaching its content. By 1977, a national survey (Weiss, 1978) indicated that the three programs were being used by 20% of the nation's elementary teachers in grades one through three and 30% of teachers in grades four through six. Although each of the projects included formative evaluation activities to ensure that materials were usable and effective in the project's own terms, there was no nationally coordinated, independent effort to assess their effect on student outcomes.

Method

In the present study, the effects of these programs on various student outcomes were assessed by quantitatively combining the results of 57 reported evaluations of the programs. Only evaluations in which control groups were used were included.
The metric used for the size of program effects was the standardized difference between the means of the experimental and control groups, (X̄_E - X̄_C)/S_C. Whenever possible, the standard deviation of the control group was used for standardization, so that the resulting values could be interpreted as the placement of the average student in the innovative-program group within the distribution of control-group students. If the control-group standard deviation was not reported, the pooled standard deviation was used, either as reported or as derived from the within-groups mean square (MS_W). When necessary, effect sizes were derived from reported F or t values, or from nonparametric statistics, as recommended by Glass (1978). Since individual researchers often reported results separately for particular student subgroups or for more than one outcome measure, effect sizes were calculated for each reported comparison. Data were compiled for a total of 400 comparisons. In the subsequent analyses, to avoid problems of interdependencies among comparison e...

Science Education 69(4): 577-591 (1985). © 1985 John Wiley & Sons, Inc.
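The effect-size computations described above can be sketched as follows. This is a minimal illustration, not the study's own code; the function names and the example numbers are hypothetical. It shows the standardized mean difference using the control-group standard deviation (Glass's delta), the pooled standard deviation used as a fallback, and the standard recovery of an effect size from a reported t statistic, d = t * sqrt(1/n_E + 1/n_C):

```python
import math

def glass_delta(mean_exp: float, mean_ctrl: float, sd_ctrl: float) -> float:
    """Standardized mean difference using the control-group SD, so the
    result places the average treated student within the control-group
    distribution."""
    return (mean_exp - mean_ctrl) / sd_ctrl

def pooled_sd(sd_exp: float, n_exp: int, sd_ctrl: float, n_ctrl: int) -> float:
    """Pooled SD, used when the control-group SD is not reported."""
    num = (n_exp - 1) * sd_exp**2 + (n_ctrl - 1) * sd_ctrl**2
    return math.sqrt(num / (n_exp + n_ctrl - 2))

def effect_size_from_t(t: float, n_exp: int, n_ctrl: int) -> float:
    """Recover a standardized effect size from a reported t statistic."""
    return t * math.sqrt(1 / n_exp + 1 / n_ctrl)

# Illustrative numbers (not taken from the study):
# process-test means of 54 vs. 48 with a control-group SD of 12
print(glass_delta(54, 48, 12))          # 0.5
print(effect_size_from_t(2.0, 50, 50))  # 0.4
```

With equal group sizes the pooled SD reduces to the root mean square of the two group SDs, which is why the pooled and control-SD versions of the metric agree when the groups have similar spread.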