Scientific reasoning is an important component of the cognitive strand of 21st-century skills and is highly emphasized in the new science education standards. This study focuses on the assessment of student reasoning in control of variables (COV), a core sub-skill of scientific reasoning. The main research question investigates the extent to which the presence of experimental data in questions impacts student reasoning and performance. This study also explores the effects of task contexts on student reasoning, as well as students' abilities to distinguish between the testability and the causal influence of variables in COV experiments. Data were collected from students in both the USA and China. Students randomly received one of two test versions, one with experimental data and one without. The results show that students from both populations (1) perform better when experimental data are not provided, (2) perform better in physics contexts than in real-life contexts, and (3) tend to equate non-influential variables with non-testable variables. In addition, based on the analysis of both quantitative and qualitative data, a possible progression of developmental levels of student reasoning in control of variables is proposed, which can inform future development of assessment and instruction.
This study used eye-tracking technology to investigate students' visual attention while taking the Force Concept Inventory (FCI) in a web-based interface. Eighty-nine university students were randomly assigned to a pre-test group or a post-test group. Students took the 30-question FCI on a computer equipped with an eye-tracker, with seven weeks of instruction between the pre- and post-test data collection. Students' performance on the FCI improved significantly from pre-test to post-test. Meanwhile, the eye-tracking results reveal that the time students spent taking the FCI was not affected by student performance and did not change from pre-test to post-test. Analysis of students' attention to answer choices shows that on the pre-test, students primarily focused on the naïve choices and ignored the expert choices. On the post-test, although students had shifted their primary attention to the expert choices, they still maintained a high level of attention to the naïve choices, indicating significant conceptual mixing and competition during problem solving. The outcomes of this study provide new insights into students' conceptual development in learning physics.
In a recent study, the 30-question Force Concept Inventory (FCI) was theoretically split into two 14-question "half-length" tests (HFCIs) covering the same set of concepts and producing mean scores that can be equated to those of the original FCI. The HFCIs require less administration time and reduce test-retest issues when different versions are used in pre-post testing. This study experimentally evaluates the practical validity and measurement uncertainty of the HFCIs with three different college student populations. Measured mean scores on each HFCI were within ∼3% of each other at every university. Measured mean score differences between the HFCI and FCI were also within ∼3%. These differences are less than the value of a single question on the 30-question FCI and are not statistically significant. The overall results suggest that, in conditions similar to those of this study, the HFCIs can be used as alternatives to the full-length FCI when total scores or score gains are the measurement goals.