Research on the test structure of the Force Concept Inventory (FCI) has largely been performed with exploratory methods such as factor analysis and cluster analysis. Multidimensional Item Response Theory (MIRT) provides an alternative to traditional exploratory factor analysis that allows statistical testing to identify the optimal number of factors. Application of MIRT to a sample of N = 4716 FCI post-tests identified a 9-factor solution as optimal. Additional analysis showed that a substantial part of the identified factor structure resulted from the practice of using problem blocks and from pairs of similar questions. Applying MIRT to a reduced set of FCI items, removing blocked items and repeated items, produced a 6-factor solution; however, the factors still had little relation to the general structure of Newtonian mechanics. A theoretical model of the FCI was constructed from expert solutions and fit to the FCI by constraining the MIRT parameter matrix to the theoretical model. Variations on the theoretical model were then explored to identify an optimal model. The optimal model supported the differentiation of Newton's 1st and 2nd laws; of one-dimensional and three-dimensional kinematics; and of the principle of the addition of forces from Newton's 2nd law. The model suggested by the authors of the FCI was also fit; the optimal MIRT model was statistically superior.
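The model-selection logic described above, scoring candidate factor counts by a fit statistic, can be sketched as follows. This minimal sketch uses scikit-learn's linear-Gaussian `FactorAnalysis` on simulated continuous data as a simplified stand-in for the dichotomous MIRT models fit in the study; the simulated data, candidate factor counts, and cross-validated log-likelihood scoring are illustrative assumptions, not the study's analysis.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import cross_val_score

# Simulated stand-in for item-response data: 500 "students", 30 "items",
# generated from 3 latent factors plus noise. Real FCI responses are
# dichotomous and call for MIRT; this linear model only illustrates
# selecting the number of factors by a fit statistic.
rng = np.random.default_rng(0)
n_students, n_items, n_true_factors = 500, 30, 3
loadings = rng.normal(size=(n_true_factors, n_items))
factor_scores = rng.normal(size=(n_students, n_true_factors))
responses = factor_scores @ loadings + rng.normal(size=(n_students, n_items))

# Score each candidate factor count by cross-validated average
# log-likelihood (FactorAnalysis.score) and keep the best.
fits = {}
for k in range(1, 7):
    fa = FactorAnalysis(n_components=k, random_state=0)
    fits[k] = cross_val_score(fa, responses).mean()

best_k = max(fits, key=fits.get)
```

Held-out log-likelihood penalizes overfitting automatically, which is why it can adjudicate between factor counts where in-sample fit always improves with more factors.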
As research-based self-paced e-learning tools become increasingly available, a critical issue educators encounter is implementing strategies to ensure that all students engage with them as intended. Here, we discuss the effectiveness of research-based e-learning tutorials as self-paced learning tools in large-enrollment brick-and-mortar introductory physics courses. These interactive tutorials were developed via research in physics education and were found to be effective for a diverse group of introductory physics students in one-on-one implementation. Instructors encouraged the use of these self-paced tools in a self-paced learning environment by telling students that they would be helpful for solving the assigned homework problems and that the underlying physics principles in the tutorial problems would be similar to those in the in-class quizzes (which we call paired problems). We find that many students who struggled in the courses in which these adaptive e-learning tutorials were assigned as a self-study tool performed poorly on the paired problems. In contrast, a majority of student volunteers in one-on-one implementation greatly benefited from the tutorials and performed well on the paired problems. The significantly lower overall performance on paired problems administered as an in-class quiz, compared to the performance of student volunteers who used the research-based tutorials in one-on-one implementation, suggests that many students enrolled in introductory physics courses did not effectively engage with the self-paced tutorials outside of class and may have only used them superficially. The findings suggest that many students in need of out-of-class remediation via self-paced learning tools may have difficulty motivating themselves and may lack the self-regulation and time-management skills to engage effectively with tools specially designed to help them learn at their own pace.
We conclude by proposing a theoretical framework to help students with diverse prior preparations engage effectively with self-study tools.
The use of machine learning and data mining techniques across many disciplines has exploded in recent years, with the field of educational data mining growing significantly in the past 15 years. In this study, random forest and logistic regression models were used to construct early warning models of student success in introductory calculus-based mechanics (Physics 1) and electricity and magnetism (Physics 2) courses at a large eastern land-grant university. By combining in-class variables such as homework grades with institutional variables such as cumulative GPA, we can predict whether a student will receive less than a "B" in the course with 73% accuracy in Physics 1 and 81% accuracy in Physics 2, using only data available in the first week of class, with logistic regression models. The institutional variables were critical for high accuracy in the first four weeks of the semester. In-class variables became more important only after the first in-semester examination was administered. The student's cumulative college GPA was consistently the most important institutional variable. Homework grade became the most important in-class variable after the first week and consistently increased in importance as the semester progressed; homework grade became more important than cumulative GPA after the first in-semester examination. Demographic variables including gender, race or ethnicity, and first-generation status were not important variables for predicting course grade.
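An early-warning classifier of the kind described can be sketched with scikit-learn's logistic regression. The synthetic features below (cumulative GPA and a first-week homework score), their effect sizes, and the below-"B" label are illustrative assumptions, not the study's data or variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in: one institutional variable (cumulative GPA) and one
# in-class variable (first-week homework score). The label flags students
# earning below a "B". Coefficients in the generative model are invented
# purely to produce a learnable signal.
rng = np.random.default_rng(1)
n = 2000
gpa = rng.uniform(2.0, 4.0, n)
homework = rng.uniform(0.0, 1.0, n)
logit = -6.0 + 1.5 * gpa + 2.0 * homework  # higher -> more likely B or better
p_b_or_better = 1.0 / (1.0 + np.exp(-logit))
below_b = (rng.uniform(size=n) > p_b_or_better).astype(int)

# Fit on a training split, report held-out accuracy (the study's metric).
X = np.column_stack([gpa, homework])
X_train, X_test, y_train, y_test = train_test_split(X, below_b, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
```

A logistic model's coefficients also give a direct, if simplified, read on variable importance, consistent with the abstract's finding that GPA and homework grade dominate at different points in the semester.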
We describe the development of a Quantum Interactive Learning Tutorial (QuILT) on quantum key distribution, a context that involves a practical application of quantum mechanics. The QuILT helps upper-level undergraduate students learn quantum mechanics using a simple two-state system and was developed based upon the findings of cognitive research and physics education research. One protocol used in the QuILT involves generating a random shared key over a public channel for encrypting and decrypting information using single photons with non-orthogonal polarization states, and another protocol makes use of two entangled spin-½ particles. The QuILT uses a guided approach and focuses on helping students build links between the formalism and conceptual aspects of quantum physics without compromising the technical content. We also discuss findings from a preliminary in-class evaluation.
Testwiseness is defined as the set of cognitive strategies a student uses with the intent of improving his or her score on a test regardless of the test's subject matter. Questions with elements that may be affected by testwiseness are common in physics assessments, even in those that have been extensively validated and widely used as evaluation tools in physics education research. The potential effects of several elements of testwiseness were analyzed for questions in the Force Concept Inventory (FCI) and the Conceptual Survey of Electricity and Magnetism that contain distractors predicted to be influenced by testwiseness. This analysis was performed using data sets collected between fall 2001 and spring 2014 at one midwestern U.S. university (including over 9500 students) and between spring 2011 and spring 2015 at a second eastern U.S. university (including over 2500 students). Student avoidance of "none of the above" or "zero" distractors was statistically significant. The effect of the position of a distractor on its likelihood of being selected was also significant. The effects of several potential positive and negative testwiseness elements on student scores were also examined by developing two modified versions of the FCI designed to include additional elements related to testwiseness; testwiseness produced little post-instruction effect on student performance on the modified instruments.
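A significance check of the kind described, testing whether a "none of the above" distractor is chosen less often than random guessing would predict, can be sketched with a normal approximation to the binomial. The counts below are hypothetical, chosen only to match the scale of the larger data set, and are not the study's numbers.

```python
import math

# Hypothetical counts: on a five-option item, random guessing would select
# the "none of the above" distractor with p = 0.2, but suppose only 1200
# of 9500 students picked it.
k, n, p = 1200, 9500, 0.2

# Normal approximation to the binomial: z measures how far below the
# chance expectation n*p the observed count falls.
expected = n * p
se = math.sqrt(n * p * (1 - p))
z = (k - expected) / se

# One-sided p-value (avoidance, i.e., fewer selections than chance)
# via the standard normal CDF.
p_value = 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

With counts of this size the normal approximation is excellent; for small cells an exact binomial test would be the safer choice.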