The conceptual spaces approach has recently emerged as a novel account of concepts. Its guiding idea is that concepts can be represented geometrically, by means of metrical spaces. While it is generally recognized that many of our concepts are vague, the question of how to model vagueness in the conceptual spaces approach has not been addressed so far, even though the answer is far from straightforward. The present paper aims to fill this lacuna.
Could the well-established fact that males tend to score higher than females on the Force Concept Inventory (FCI) be due to gender bias in the questions? The answer hinges on the definition of bias. We assert that a question is biased only if a factor other than ability (in this case gender) affects the likelihood that a student will answer the question correctly. The statistical technique of differential item functioning allows us to control for ability in our analysis of student performance on each of the thirty FCI questions. This method uses the total score on the FCI as the measure of ability. We conclude that the evidence for gender bias in the FCI questions is marginal at best.
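The differential item functioning approach described above can be illustrated with a minimal sketch. One standard DIF statistic is the Mantel-Haenszel common odds ratio, computed over 2x2 (group by correct/incorrect) tables within ability strata (here, bands of total FCI score); the numbers below are hypothetical and not from the study.

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel common odds ratio across ability strata.

    Each table is a tuple (a, b, c, d):
      a = group-1 correct, b = group-1 incorrect,
      c = group-2 correct, d = group-2 incorrect.
    An odds ratio near 1 indicates no differential item functioning.
    """
    num = den = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Hypothetical strata defined by total-score bands on a single item.
strata = [(30, 10, 28, 12), (20, 20, 18, 22), (10, 30, 9, 31)]
print(round(mantel_haenszel_or(strata), 2))  # prints 1.22
```

Because the comparison is made within each ability stratum, an overall score gap between groups does not by itself produce an odds ratio away from 1; only a group difference at matched ability does.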
According to Stalnaker's Hypothesis, the probability of an indicative conditional, Pr(φ → ψ), equals the probability of the consequent conditional on its antecedent, Pr(ψ | φ). While the hypothesis is generally taken to have been conclusively refuted by Lewis' and others' triviality arguments, its descriptive adequacy has been confirmed in many experimental studies. In this paper, we consider some possible ways of resolving the apparent tension between the analytical and the empirical results relating to Stalnaker's Hypothesis and we argue that none offer a satisfactory resolution.
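A toy probability model makes the quantities in Stalnaker's Hypothesis concrete, and shows why the indicative conditional cannot simply be the material conditional if the hypothesis holds. The world probabilities below are arbitrary illustrative values.

```python
# Four possible worlds with probabilities and truth values for phi, psi.
worlds = [
    # (Pr, phi, psi)
    (0.3, True,  True),
    (0.1, True,  False),
    (0.2, False, True),
    (0.4, False, False),
]

pr_phi = sum(p for p, phi, _ in worlds if phi)                     # 0.4
pr_phi_and_psi = sum(p for p, phi, psi in worlds if phi and psi)   # 0.3
pr_psi_given_phi = pr_phi_and_psi / pr_phi                         # 0.75

# The material conditional is true except where phi holds and psi fails.
pr_material = sum(p for p, phi, psi in worlds if (not phi) or psi)  # 0.9
```

Here Pr(ψ | φ) = 0.75 while the material conditional has probability 0.9, so a reading of "if φ then ψ" that satisfies the hypothesis must assign the conditional a probability different from that of the material conditional; the triviality arguments show that no single proposition can play this role across all probability functions.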
The application of factor analysis to the Force Concept Inventory (FCI) has proven to be problematic. Some studies have suggested that factor analysis of test results serves as a helpful tool in assessing the recognition of Newtonian concepts by students. Other work has produced at best ambiguous results. For the FCI administered as a pre- and post-test, we see factor analysis as a tool by which the changes in conceptual associations made by our students may be gauged given the evolution of their response patterns. This analysis allows us to identify and track conceptual linkages, affording us insight into how our students have matured due to instruction. We report on our analysis of 427 pre- and post-tests. The factor models for the pre- and post-tests are explored and compared, along with the methodology by which these models were fit to the data. The post-test factor pattern is more aligned with an expert's interpretation of the questions' content, as it allows for a more readily identifiable relationship between factors and physical concepts. We discuss this evolution in the context of approaching the characteristics of an expert with force concepts. We also find that certain test items do not contribute significantly to the pre- or post-test factor models and offer possible explanations as to why this is so. This suggests that such questions may not be effective in probing the conceptual understanding of our students.
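The factor-extraction step underlying this kind of analysis can be sketched as follows. This is not the authors' fitting procedure; it is a generic illustration of one common retention rule (Kaiser's eigenvalue-greater-than-one criterion) applied to the inter-item correlation matrix of simulated binary responses, with 6 hypothetical items built from two latent "concept" dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated responses: 427 students x 6 items, driven by two latent
# abilities (hypothetical stand-ins for two Newtonian concept clusters).
n_students = 427
ability = rng.normal(size=(n_students, 2))
loadings = np.array([[1, 0], [1, 0], [1, 0],
                     [0, 1], [0, 1], [0, 1]], dtype=float)
latent = ability @ loadings.T + rng.normal(scale=0.5, size=(n_students, 6))
responses = (latent > 0).astype(float)  # 1 = correct, 0 = incorrect

# Kaiser criterion: retain factors whose eigenvalue of the
# inter-item correlation matrix exceeds 1.
corr = np.corrcoef(responses, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # sorted descending
n_factors = int(np.sum(eigvals > 1.0))
print(n_factors)  # two clusters were built in, so 2 is expected here
```

Comparing the factor pattern fitted to pre-test responses against the pattern fitted to post-test responses, as the abstract describes, then amounts to asking whether the post-test loadings group items more cleanly by physical concept.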