The purpose of this study is to examine the effects of a false assumption regarding the motivation of examinees on test construction. Simulated data were generated using two models of item responses (the three-parameter logistic item response model alone and in combination with Wise's examinee persistence model) and were calibrated using a Bayesian method. For the conditions studied, biased item parameter estimates resulted from responses from poorly motivated examinees. Bias in item parameter estimates resulted in bias in item information estimates and test information estimates for an optimally constructed test. The direction and magnitude of the bias depended on the conditions studied. The implications of the results for test development companies, examinees, and users of test results are discussed.

Educational researchers use item response models and related methods to describe the interaction between examinees and test items, design tests, select items, address item bias, and equate and report test scores. Some models, however, do not describe well the interactions between examinees and items. Examinees may respond to test items in ways that are inconsistent with the item response model. These unusual or "aberrant" response behaviors on the part of examinees have stimulated interest in research referred to as "person-fit measurement."

Drasgow, Levine, and Williams (1985) define person-fit measurement as "a model-based attempt to control test pathologies by recognizing unusual patterns. A model is fitted to the item response patterns of a large sample of presumably normal examinees. Subsequently, individual examinees and their response patterns can be ordered according to how well they are fitted by the group model" (p. 67).

Most research on person-fit measurement has focused on the development of statistical indices to detect aberrant response patterns (e.g.
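The three-parameter logistic (3PL) model named in the abstract gives the probability of a correct response as P(correct) = c + (1 − c) / (1 + exp(−a(θ − b))). A minimal sketch of simulating dichotomous responses under this model is shown below; all parameter values and sample sizes are illustrative choices, not values from the study, and Wise's persistence component is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_3pl(theta, a, b, c):
    """Simulate 0/1 responses under the three-parameter logistic model.

    theta : (n_examinees,) ability values
    a, b, c : (n_items,) discrimination, difficulty, guessing parameters
    Returns the response matrix and the underlying probabilities.
    """
    # Logit for every examinee-item pair via broadcasting.
    z = a[None, :] * (theta[:, None] - b[None, :])
    p = c[None, :] + (1.0 - c[None, :]) / (1.0 + np.exp(-z))
    return (rng.random(p.shape) < p).astype(int), p

# Illustrative parameters (not from the study).
theta = rng.normal(0.0, 1.0, 1000)   # examinee abilities
a = rng.uniform(0.8, 2.0, 20)        # item discriminations
b = rng.normal(0.0, 1.0, 20)         # item difficulties
c = np.full(20, 0.2)                 # guessing floor
responses, p = simulate_3pl(theta, a, b, c)
```

Because the guessing parameter c sets a floor on the response probability, even very low-ability examinees answer correctly at least c of the time in expectation, which is one reason unmotivated-examinee behavior biases parameter calibration.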
The collection of ongoing performance data was reasonably feasible, reliable, and valid.
The purpose of this study was to identify the effect of teaching practices and student motivation on student achievement in mathematics. Two principal component analyses (PCA) were conducted. The first PCA clustered 22 items related to teaching practices, selected from a teacher questionnaire. The second PCA clustered 11 items related to student motivation, selected from a student questionnaire. Results from the first PCA revealed that the four extracted components were related to several frameworks found in the literature on teaching strategies. The second PCA yielded two components related to student motivation. These extracted components were then used as two sets of independent variables in a hierarchical regression analysis to study their impact on student achievement in mathematics. The study revealed that the four teaching-practice components and the two student-motivation components were significantly related to student academic achievement in mathematics on the large-scale assessment.
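The two-step analysis described above (PCA to extract components, then regression on the component scores) can be sketched with numpy. The data here are random placeholders standing in for the 22 teaching-practice questionnaire items; the four-component extraction and the outcome variable are likewise illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the questionnaire: 200 respondents x 22 items.
X = rng.normal(size=(200, 22))

# PCA via the SVD of the column-standardized data matrix.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)

explained = S**2 / (S**2).sum()   # proportion of variance per component
scores = Xs @ Vt[:4].T            # scores on the first four components

# The component scores then enter a regression step as predictors
# (a single OLS block of a hierarchical regression; outcome is fake).
y = rng.normal(size=200)
design = np.column_stack([np.ones(200), scores])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
```

In a hierarchical regression, blocks of predictors (here, the teaching-practice components, then the motivation components) would be entered sequentially and the change in explained variance compared across steps.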
When large-scale assessments (LSA) do not hold personal stakes for students, students may not put forth their best effort. Low-effort examinee behaviors (e.g., guessing, omitting items) result in an underestimate of examinee abilities, which is a concern when using results of LSA to inform educational policy and planning. The purpose of this study was to explore the relationship between examinee motivation as defined by expectancy-value theory, student effort, and examinee mathematics abilities. A principal components analysis was used to examine the data from Grade 9 students (n = 43,562) who responded to a self-report questionnaire on their attitudes and practices related to mathematics. The results suggested a two-component model in which the components were interpreted as task values in mathematics and student effort. Next, a hierarchical linear model was implemented to examine the relationship between examinee component scores and their estimated ability on a LSA. The results of this study provide evidence that motivation, as defined by expectancy-value theory, and student effort partially explain student ability estimates, and may have implications for the information that gets transferred to testing organizations, school boards, and teachers when assessing students' Grade 9 mathematics learning.
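A hierarchical linear model is appropriate here because students are nested in schools, and the usual first step is an unconditional model that partitions variance into within- and between-school parts (the intraclass correlation, ICC). The sketch below computes the ICC from a one-way ANOVA decomposition on fully simulated two-level data; the group sizes, variances, and "school" structure are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-level structure: students nested in 50 schools,
# 40 students per school (balanced for simplicity).
n_schools, n_per = 50, 40
school = np.repeat(np.arange(n_schools), n_per)
u = rng.normal(0.0, 0.5, n_schools)                  # school random intercepts
ability = u[school] + rng.normal(0.0, 1.0, n_schools * n_per)

# One-way ANOVA decomposition for the unconditional (null) model.
grand = ability.mean()
group_means = np.array([ability[school == g].mean() for g in range(n_schools)])
msb = n_per * ((group_means - grand) ** 2).sum() / (n_schools - 1)
msw = ((ability - group_means[school]) ** 2).sum() / (n_schools * (n_per - 1))

# ICC: share of total variance attributable to schools.
icc = (msb - msw) / (msb + (n_per - 1) * msw)
```

With a simulated between-school standard deviation of 0.5 and within-school standard deviation of 1.0, the ICC estimate should land near 0.25²/(0.25² + 1) ≈ 0.2; a nontrivial ICC is what justifies fitting the full multilevel model with the component scores as predictors.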