ABSTRACT: Learning progressions are ordered descriptions of students' understanding of a given concept. In this paper, we describe the iterative process of developing a force and motion learning progression and associated assessment items. We report on a pair of studies designed to explore the diagnosis of students' learning progression levels. First, we compare the use of ordered multiple-choice (OMC) and open-ended (OE) items for assessing students relative to the learning progression. OMC items appear to provide more precise diagnoses of students' learning progression levels and to be more valid, eliciting students' conceptions in a way that more closely resembles cognitive interviews. Second, we explore evidence bearing on two challenges concerning the reliability and validity of level diagnoses: the consistency with which students respond to items set in different contexts, and the ways in which students interpret and use language in responding to items. As predicted, students do not respond consistently to similar problems set in different contexts. Although the language used in OMC items generally seems to reflect student thinking, misinterpretation of the language in items may lead to inaccurate diagnoses for a subset of students. Both issues are less problematic for classroom applications than for the use of learning progressions in large-scale testing.
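The abstract does not describe its scoring procedure in detail. As a rough, hypothetical sketch of how OMC-based diagnosis is often operationalized (each answer option keyed to a progression level, with a student's diagnosis taken as the modal level across items), consider the following. The item keys, levels, and responses here are invented for illustration and are not the instrument from the paper.

```python
from collections import Counter

# Hypothetical OMC item keys: each answer option maps to a learning
# progression level (levels and keys are illustrative only).
ITEM_KEYS = [
    {"A": 1, "B": 2, "C": 3, "D": 4},  # item 1
    {"A": 2, "B": 1, "C": 4, "D": 3},  # item 2
    {"A": 3, "B": 4, "C": 1, "D": 2},  # item 3
]

def diagnose_level(responses):
    """Map a student's OMC responses to option levels and return the
    modal level, plus the share of responses at that level as a crude
    consistency index."""
    levels = [key[r] for key, r in zip(ITEM_KEYS, responses)]
    (level, count), = Counter(levels).most_common(1)
    return level, count / len(levels)

# Example: a student answering "C", "C", "B" across the three items.
level, consistency = diagnose_level(["C", "C", "B"])
print(f"diagnosed level {level}, consistency {consistency:.2f}")
```

A low consistency share under this kind of scheme is one simple signal of the cross-context inconsistency the abstract reports.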
Insufficient effort responding (IER) affects many forms of assessment in both educational and psychological contexts. Much research has examined different types of IER, its impact on the psychometric properties of test scores, and the preprocessing procedures used to detect it. However, there is a gap in the literature in terms of practical advice for applied researchers and psychometricians when evaluating multiple sources of IER evidence, including the best strategy or combination of strategies when preprocessing data. In this study, we demonstrate how the use of different IER detection methods may affect psychometric properties such as predictive validity and reliability. Moreover, we evaluate how different data cleansing procedures can detect different types of IER. We provide evidence via simulation studies and an applied analysis using ACT's Engage assessment as a motivating example. Based on the findings of the study, we provide recommendations and future research directions for those who suspect their data may contain responses reflecting careless, random, or biased responding.
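The abstract does not specify which detection methods were compared. As a hedged illustration of what IER preprocessing can look like in practice, the sketch below computes two widely used indices on simulated Likert-type data: the longstring index (longest run of identical consecutive responses, which flags straightlining) and the squared Mahalanobis distance from the sample centroid (which flags multivariate outliers). The data and cutoffs are invented and are not the procedures applied to ACT Engage.

```python
import numpy as np

rng = np.random.default_rng(0)

def longstring(row):
    """Longest run of identical consecutive responses in one vector."""
    best = run = 1
    for prev, cur in zip(row, row[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def mahalanobis_sq(X):
    """Squared Mahalanobis distance of each row from the centroid."""
    centered = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    return np.einsum("ij,jk,ik->i", centered, inv_cov, centered)

# Simulated data: 200 attentive respondents plus 20 straightliners
# answering 20 five-point Likert items (illustrative only).
attentive = rng.integers(1, 6, size=(200, 20))
straightliners = np.tile(rng.integers(1, 6, size=(20, 1)), (1, 20))
X = np.vstack([attentive, straightliners])

ls = np.array([longstring(row) for row in X])
d2 = mahalanobis_sq(X.astype(float))

# Flag respondents exceeding illustrative cutoffs on either index.
flagged = (ls >= 10) | (d2 > np.quantile(d2, 0.95))
print(f"flagged {flagged.sum()} of {len(X)} respondents")
```

Because each index is sensitive to a different response pattern (runs versus multivariate atypicality), combining indices in this way is one reason different cleansing procedures detect different types of IER.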
Assessments associated with learning progressions are designed to provide diagnostic information about the level and nature of student understanding. Valid interpretations of such diagnoses are only possible when students consistently express the ideas associated with a single learning progression level. Latent class analysis was employed to evaluate whether patterns of expected responses to diagnostic multiple-choice items afforded valid interpretations of learning progression level diagnoses. Results indicated that students with scientifically accurate understanding of the forces acting on an object with constant speed usually reasoned systematically across items, but many other students did not. Consequently, interpretations of learning progression level diagnoses on a proposed learning progression would often be invalid. Analyses of this sort would be useful in developing and validating future learning progressions.
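The abstract omits the model specification. Below is a minimal, self-contained sketch of a latent class analysis of the kind described, fitted by expectation-maximization to binary item responses (e.g., 1 = response consistent with the scientifically accurate level). The two-class structure and all data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_lca(X, n_classes=2, n_iter=200, eps=1e-9):
    """Fit a latent class model to binary responses X (students x items)
    via EM: class proportions pi and per-class item-endorsement
    probabilities theta."""
    n, m = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)
    theta = rng.uniform(0.25, 0.75, size=(n_classes, m))
    for _ in range(n_iter):
        # E-step: posterior class memberships, computed in log space.
        log_lik = (X @ np.log(theta + eps).T
                   + (1 - X) @ np.log(1 - theta + eps).T
                   + np.log(pi + eps))
        log_lik -= log_lik.max(axis=1, keepdims=True)
        post = np.exp(log_lik)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update class proportions and item probabilities.
        pi = post.mean(axis=0)
        theta = (post.T @ X) / (post.sum(axis=0)[:, None] + eps)
    return pi, theta, post

# Simulated responses: a "consistent" class answering most items at one
# level and a "mixed" class responding near chance (illustrative only).
consistent = rng.random((150, 8)) < 0.9
mixed = rng.random((100, 8)) < 0.5
X = np.vstack([consistent, mixed]).astype(float)

pi, theta, post = fit_lca(X)
print("class proportions:", np.round(pi, 2))
print("item probabilities by class:\n", np.round(theta, 2))
```

In a validation study of this kind, a recovered class whose item probabilities are uniformly high (or low) supports systematic reasoning, whereas a class with probabilities near chance suggests the level diagnosis would not be interpretable for those students.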