In this meta-analysis, we investigated how methods of providing item-based feedback in computer-based environments affect students' learning outcomes. From 40 studies, 70 effect sizes were computed, ranging from −0.78 to 2.29. A mixed model was used for the data analysis. The results show that elaborated feedback (EF; e.g., providing an explanation) produced larger effect sizes (0.49) than feedback regarding the correctness of the answer (KR; 0.05) or providing the correct answer (KCR; 0.32). EF was particularly effective, relative to KR and KCR, for higher order learning outcomes. Effect sizes were positively related to EF feedback, and larger effect sizes were found for mathematics than for social sciences, science, and languages. Effect sizes were negatively affected by delayed feedback timing and by primary and high school settings. Although the results suggested that immediate feedback was more effective for lower order learning than delayed feedback and vice versa, no significant interaction was found.
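The mixed-model analysis described above pools per-study effect sizes while allowing for between-study variance. A minimal sketch of the random-effects pooling step (DerSimonian-Laird estimator) is shown below; the effect sizes and sampling variances are hypothetical placeholders, not the data from the 40 studies.

```python
import numpy as np

# Hypothetical effect sizes (e.g., Hedges' g) and their sampling variances;
# the actual study-level data are not reproduced here.
g = np.array([0.49, 0.05, 0.32, 0.75, -0.10])
v = np.array([0.04, 0.06, 0.05, 0.09, 0.07])

# Fixed-effect weights and heterogeneity statistic Q
w = 1.0 / v
q = np.sum(w * (g - np.sum(w * g) / np.sum(w)) ** 2)

# DerSimonian-Laird estimate of between-study variance tau^2
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(g) - 1)) / c)

# Random-effects pooled effect size and its standard error
w_star = 1.0 / (v + tau2)
pooled = np.sum(w_star * g) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled g = {pooled:.2f} (SE = {se:.2f}), tau^2 = {tau2:.3f}")
```

A full mixed (meta-regression) model would additionally regress the effect sizes on moderators such as feedback type, subject domain, timing, and educational level.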
Wald’s (1947) sequential probability ratio test can be implemented as an adaptive test for classifying examinees into categories. However, current implementations use an item selection method that is either random or based on Fisher information (FI), a criterion related to optimized examinee trait estimates. In this study, a method based on Kullback-Leibler information (KLI) was evaluated. Simulation studies were conducted for two- and three-category classifications in which item selection methods based on FI and KLI were compared. Results showed that testing algorithms using KLI-based item selection performed better than or as well as those using FI-based item selection.
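A minimal sketch of a two-category SPRT with KLI-based item selection is shown below. It assumes a 2PL response model; the item bank, cut score, indifference region, and error rates are illustrative assumptions, not parameters taken from the simulation studies.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def kl_info(a, b, theta0, theta1):
    """KL divergence between the item response distributions at theta1 vs. theta0."""
    p0, p1 = p_correct(theta0, a, b), p_correct(theta1, a, b)
    return p1 * np.log(p1 / p0) + (1 - p1) * np.log((1 - p1) / (1 - p0))

# Illustrative item bank (discriminations a, difficulties b) and SPRT settings
rng = np.random.default_rng(0)
a_bank = rng.uniform(0.8, 2.0, 200)
b_bank = rng.uniform(-2.0, 2.0, 200)
cut, delta = 0.0, 0.3                      # cut score and indifference region
alpha, beta = 0.05, 0.05                   # nominal error rates
upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))

true_theta, log_lr, used = 0.6, 0.0, set()
for _ in range(50):                        # maximum test length
    # Select the unused item with maximum KL information between the two hypotheses
    kl = np.array([kl_info(a_bank[i], b_bank[i], cut - delta, cut + delta)
                   if i not in used else -np.inf for i in range(200)])
    j = int(np.argmax(kl))
    used.add(j)

    # Simulate a response and update the SPRT log-likelihood ratio
    x = rng.random() < p_correct(true_theta, a_bank[j], b_bank[j])
    p_hi = p_correct(cut + delta, a_bank[j], b_bank[j])
    p_lo = p_correct(cut - delta, a_bank[j], b_bank[j])
    log_lr += np.log(p_hi / p_lo) if x else np.log((1 - p_hi) / (1 - p_lo))

    if log_lr >= upper:
        print("classified above the cut after", len(used), "items"); break
    if log_lr <= lower:
        print("classified below the cut after", len(used), "items"); break
else:
    print("no decision within the item limit")
```

Replacing the KL criterion in the selection step with the 2PL Fisher information evaluated at a provisional trait estimate gives the FI-based comparison condition.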
The objective of this study was to explore the possibilities for using computerized adaptive testing in situations in which examinees are to be classified into one of three categories. Testing algorithms based on two different statistical computation procedures are described and evaluated: the first is based on statistical testing, the other on statistical estimation. Item selection methods based on maximum information (MI), with content and exposure control, are considered. The measurement quality of the proposed testing algorithms is reported. The results show that a reduction of at least 22% in the mean number of items can be expected in a computerized adaptive test (CAT) compared with an existing paper-and-pencil placement test. Furthermore, statistical testing is a promising alternative to statistical estimation. Finally, it is concluded that imposing constraints on the MI selection strategy does not negatively affect the quality of the testing algorithms.
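A minimal sketch of maximum-information item selection with a simple randomesque exposure control (choosing at random among the few most informative unused items) is shown below. The 2PL item bank, the top-k value, and the control scheme are illustrative assumptions and are not drawn from the placement test described in the study.

```python
import numpy as np

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1 - p)

def select_item(theta_hat, a_bank, b_bank, used, top_k=5, rng=None):
    """MI selection with randomesque exposure control:
    pick at random among the top_k most informative unused items."""
    rng = rng or np.random.default_rng()
    info = np.array([fisher_info(theta_hat, a, b) for a, b in zip(a_bank, b_bank)])
    info[list(used)] = -np.inf                  # exclude items already administered
    candidates = np.argsort(info)[-top_k:]      # indices of the top_k most informative items
    return int(rng.choice(candidates))

# Illustrative usage with a small hypothetical 2PL bank
rng = np.random.default_rng(1)
a_bank = rng.uniform(0.8, 2.0, 100)
b_bank = rng.uniform(-2.0, 2.0, 100)
used = {3, 17}                                  # previously administered items
next_item = select_item(0.2, a_bank, b_bank, used, top_k=5, rng=rng)
print("next item:", next_item)
```

Restricting the candidate pool spreads item usage across examinees; content constraints can be layered on by filtering the candidate set to items from under-represented content areas before the random draw.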