To compete in a global economy, students need resources and curricula that focus on critical thinking and reasoning in science. Despite awareness of the need for complex reasoning, American students perform poorly relative to their international peers on standardized tests measuring complex thinking in science. Research on learning progressions is one effort to provide more coherent science curricular sequences and assessments focused on complex thinking about focal science topics. This paper describes an empirically driven, five-step process for developing a three-year learning progression focused on complex thinking about biodiversity. Our efforts produced empirical results and work products including: (1) a revised definition of learning progressions, (2) empirically driven, three-year progressions for complex thinking about biodiversity, (3) an application of statistical approaches to the analysis of learning progression products, (4) Hierarchical Linear Modeling results demonstrating significant student achievement in complex thinking about biodiversity, and (5) Growth Model results demonstrating strengths and weaknesses of the first version of our curricular units. The empirical studies present information that informs both curriculum and assessment development. For curriculum development, we discuss the role of learning progressions as templates for developing organized sequences of curricular units focused on complex science. For assessment development, learning progression-guided assessments provide a greater range and amount of information, and discriminate more reliably between students of differing abilities, than a contrasting standardized assessment measure that was also focused on biodiversity content.
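The abstract names Hierarchical Linear Modeling and growth modeling as the analytic approaches but does not reproduce the analysis. As a rough illustration of the general technique (not the authors' actual model or data), the sketch below fits a two-level random-intercept growth model, with assessment waves nested within students, using statsmodels; every column name and value is a simulated assumption.

    # Minimal sketch of an HLM-style growth model (illustrative only; not
    # the study's analysis). Scores are simulated: each student has a
    # random starting ability plus an average gain of 0.5 per wave.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    rows = []
    for sid in range(40):                        # 40 hypothetical students
        start = rng.normal(1.0, 0.3)            # student-specific intercept
        for wave in range(3):                   # 3 assessment occasions
            rows.append((sid, wave, start + 0.5 * wave + rng.normal(0, 0.2)))
    data = pd.DataFrame(rows, columns=["student_id", "wave", "score"])

    # Random-intercept growth model: the fixed `wave` coefficient estimates
    # the average per-wave gain; the group variance captures how much
    # students differ from one another at baseline.
    result = smf.mixedlm("score ~ wave", data, groups=data["student_id"]).fit()
    print(result.summary())

In a model of this kind, a significant positive coefficient on the time variable is what would warrant a claim of significant growth in the measured construct.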
Making claims about what students know and can do in science requires gathering systematic evidence of students' knowledge and abilities. This paper describes an assessment system designed to elicit information from students at many points along developmental trajectories and demonstrates how this system was used to gather principled evidence of how students reason about food web and food chain disturbances. Specifically, the assessment system was designed to gather information about students' intermediary or middle knowledge on a pathway toward more sophisticated understanding. Findings indicate that, in association with a curricular intervention, student gains were significant. Despite overall gains, however, some students still struggled to explain what might happen during a disturbance to an ecosystem. In addition, this paper discusses the importance of having a cognitive framework to guide task design and the interpretation of evidence. This framework allowed for the gathering of detailed information, which provided insight into the intricacies of how students reason about scientific scenarios. In particular, the assessment system illustrated multiple types of middle knowledge that students may possess, indicating that there are multiple "messy middles" students may move through as they develop the ability to reason about complex scientific situations.
Policy documents in science education suggest that even in the earliest years of formal schooling, students are capable of constructing scientific explanations about focal content. Nonetheless, few research studies provide insight into how to effectively scaffold late elementary-age students' fruitful creation of scientific explanations. This article describes two research studies that address the question: what makes explanation construction difficult for elementary students? The studies were conducted in urban fourth, fifth, and sixth grade classrooms where students were learning science through curricular units containing 8 weeks of scaffold-rich activities focused on explanation construction. The first study focused on the kind and amount of information scaffold-rich assessments provided about young students' abilities to construct explanations under a range of scaffold conditions. Results demonstrated that the fifth and sixth grade tests provided strong information about a range of students' abilities to construct explanations under a range of supported conditions. In contrast, the fourth grade test did not provide as much information, nor was it sensitive to the curricular intervention. The second study provided information on pre–post test achievement relative to the amount of curricular intervention used over the 8-week period with each cohort. Results demonstrated that when the amount of intervention was taken into account, there were strong learning gains in all three grade-level cohorts. In conjunction with the pre–post study, a type-of-error analysis was conducted to better understand the nature of errors among younger students. This analysis revealed that our youngest students generated the most incomplete responses and struggled in particular ways with generating valid evidence. Conclusions emphasize the synergistic value of research studies on scaffold-rich assessments, curricular scaffolds, and teacher guidance toward a more complete understanding of how to support young students' explanation construction. © 2011 Wiley Periodicals, Inc. J Res Sci Teach 49: 141–165, 2012
This article evaluates a validity argument for the degree to which assessment tasks can provide evidence about knowledge that fuses a progression of core disciplinary ideas in ecology with a progression for the scientific practice of developing evidence-based explanations. The article describes the interpretive framework for the argument, including evidence for how well the assessment tasks match the learning progressions and the methods for interpreting students' responses to the tasks. Findings from a dual-pronged validity study, comprising a think-aloud analysis and an item difficulty analysis, are presented as evidence. The findings suggest that the tasks give students at multiple ability levels opportunities to show evidence of both successes and struggles with developing knowledge that fuses core disciplinary ideas with the scientific practice of developing evidence-based explanations. In addition, these tasks are generally able to distinguish between students of different ability levels. However, some assumptions in the interpretive argument are not supported; for example, the data do not provide evidence that would neatly place students at a given level on our progressions. Implications for the assessment system, specifically how responses are elicited from students, are discussed. In addition, we discuss the implications of our findings for defining and redesigning learning progressions. © 2013 Wiley Periodicals, Inc. J Res Sci Teach 50: 597–626, 2013
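The item difficulty analysis mentioned above is not detailed in the abstract; learning progression studies typically use Rasch/IRT scaling for this purpose. As a simplified, hypothetical stand-in, the sketch below computes classical item difficulty (proportion correct) and point-biserial discrimination from a simulated 0/1 response matrix; all data are invented for illustration.

    # Classical-test-theory sketch of an item difficulty analysis
    # (simplified stand-in; the study may use Rasch/IRT scaling instead).
    import numpy as np

    rng = np.random.default_rng(1)
    # 200 hypothetical students x 8 tasks; tasks get progressively harder,
    # as items targeting higher progression levels should.
    responses = (rng.random((200, 8)) < np.linspace(0.85, 0.35, 8)).astype(int)

    # Difficulty (p-value): proportion of students answering each item
    # correctly; lower p means a harder item.
    difficulty = responses.mean(axis=0)

    # Discrimination: point-biserial correlation between an item and the
    # rest-of-test total, i.e., how well the item separates ability levels.
    total = responses.sum(axis=1)
    discrimination = np.array([
        np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
        for j in range(responses.shape[1])
    ])

    for j, (p, r) in enumerate(zip(difficulty, discrimination)):
        print(f"item {j}: difficulty p = {p:.2f}, discrimination r = {r:.2f}")

Items that are hard but still discriminate well are the ones most useful for distinguishing students at the upper levels of a progression.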