IRT parameters have to be estimated, and because of the estimation process, they contain uncertainty. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values, and uncertainty is not taken into account. As a consequence, the resulting tests might be off target or less informative than expected. In this paper, the process of parameter estimation is described to provide insight into the causes of uncertainty in the item parameters, and the consequences of this uncertainty are studied. In addition, an alternative automated test assembly algorithm is presented that is robust against uncertainty in the data. Several numerical examples demonstrate the performance of the robust test assembly algorithm and illustrate the dangers of not taking this uncertainty into account. Finally, some recommendations about the use of robust test assembly and some directions for further research are given.
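The core idea of robust test assembly can be illustrated with a minimal sketch. The abstract does not specify the algorithm, so the following is only an assumed maximin-style heuristic: instead of ranking items by their nominal Fisher information under a 2PL model, each item is evaluated at worst-case parameter values within k standard errors of the estimates (discrimination lowered, difficulty shifted away from the target ability). All names and the greedy selection rule are illustrative, not the paper's method.

```python
import math

def item_information(a, b, theta):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def robust_select(items, theta, n_items, k=1.0):
    """Greedy selection by information penalized for parameter uncertainty.

    Each item is a tuple (a, b, se_a, se_b). The robust value uses
    worst-case parameters within k standard errors: discrimination
    lowered and difficulty shifted away from theta. This is a simple
    maximin-style heuristic, not the algorithm from the paper.
    """
    def robust_info(item):
        a, b, se_a, se_b = item
        a_low = max(a - k * se_a, 1e-6)
        b_shift = b + k * se_b if b >= theta else b - k * se_b
        return item_information(a_low, b_shift, theta)
    return sorted(items, key=robust_info, reverse=True)[:n_items]
```

Under this rule, an item whose nominal information is high but whose parameters are poorly estimated is ranked below a slightly weaker item with small standard errors, which is exactly the trade-off that treating parameters as fixed values ignores.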
A major goal in computerized learning systems is to optimize learning, while in computerized adaptive testing (CAT) efficient measurement of the proficiency of students is the main focus. There seems to be a common interest in integrating computerized adaptive item selection into learning systems and testing. Item selection is a well-founded building block of CAT. However, a number of problems prevent the standard approach to computerized adaptive item selection, based on item response theory, from being applied to learning systems. In this work, attention is paid to three unresolved points: item banking, item selection, and the choice of IRT model. All three problems are discussed, and an approach to automated item bank generation is presented. Finally, some recommendations are given.
Computerized adaptive testing (CAT) comes with many advantages. Unfortunately, it is still quite expensive to develop and maintain an operational CAT. In this paper, the various steps involved in developing an operational CAT are described, and the literature on these topics is reviewed. Bayesian CAT is introduced as an alternative, and the use of empirical priors for estimating item and person parameters is proposed to reduce the costs of CAT. Methods to elicit empirical priors are presented, and two small examples illustrate the advantages of Bayesian CAT. Implications of the use of empirical priors are discussed, limitations are mentioned, and some suggestions for further research are formulated.
The paper deals with the introduction of empirical prior information in the estimation of candidates' ability within computerized adaptive testing (CAT). CAT is generally applied to improve the efficiency of test administration. In this paper, it is shown how the inclusion of background variables, both in the initialization and in the ability estimation, can improve the accuracy of ability estimates. In particular, a Gibbs sampler scheme is proposed for the interim and final ability estimation phases. Using both simulated and real data, it is shown that the method produces more accurate ability estimates, especially for short tests and when reproducing boundary abilities. This implies that operational problems of CAT related to weak measurement precision under particular conditions can be reduced as well. In the empirical examples, the methods were applied to CAT for intelligence testing in the area of personnel selection and to educational measurement. Other promising applications lie in the medical world, where testing efficiency is of paramount importance as well.
The focus of this article is on the choice of suitable prior distributions for item parameters within item response theory (IRT) models. In particular, the use of empirical prior distributions for item parameters is proposed. First, regression trees are implemented in order to build informative empirical prior distributions. Second, model estimation is conducted within a fully Bayesian approach through the Gibbs sampler, which makes estimation feasible even with increasingly complex models. The main results show that item parameter recovery is improved by the introduction of empirical prior information about item parameters, even when only a small sample is available.
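The regression-tree idea can be reduced to a minimal sketch: on a single categorical item feature, a regression tree amounts to grouping calibrated items by that feature and using each leaf's mean and standard deviation of difficulty as the empirical prior for a new item. The feature, the grouping rule, and the fallback behavior below are all assumptions for illustration, not the article's actual implementation.

```python
from statistics import mean, stdev

def empirical_prior(difficulties, features, new_feature):
    """Empirical normal prior for a new item's difficulty parameter.

    'difficulties' are previously calibrated b-parameters and 'features'
    the corresponding categorical item feature (e.g. a hypothetical
    content-area label). A regression tree on one categorical feature
    reduces to grouping by that feature; the leaf mean and SD serve as
    prior mean and SD. Falls back to the overall distribution when the
    feature value is unseen or the leaf is too small.
    """
    groups = {}
    for feat, b in zip(features, difficulties):
        groups.setdefault(feat, []).append(b)
    leaf = groups.get(new_feature)
    if leaf is None or len(leaf) < 2:
        leaf = list(difficulties)
    return mean(leaf), stdev(leaf)
```

An uncalibrated "algebra" item would then start from the prior implied by previously calibrated algebra items rather than a diffuse default, which is what makes parameter recovery feasible with small samples.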