2010 10th IEEE International Conference on Advanced Learning Technologies
DOI: 10.1109/icalt.2010.58

Computerized Adaptive Testing Based on Decision Tree

Abstract: This paper proposes a new computerized adaptive testing (CAT) method that employs a decision tree model instead of traditional test theories. The model's attribute variables are examinees' responses to the individual items, and its output variable is examinees' total test scores. Simulation experiments show that the proposed method performs better than traditional methods and addresses their problems.
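The abstract's core idea can be illustrated with a short, hedged sketch: a regression tree is fitted with examinees' item responses as the attribute variables and the total test score as the output variable. The simulated data, item-bank size, and the use of scikit-learn's DecisionTreeRegressor are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (assumptions: simulated data, scikit-learn; not the authors' code)
# of a decision-tree-based CAT: item responses are the attributes, the total
# test score is the output variable.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Simulated calibration data: 1000 examinees x 20 dichotomous items (Rasch-like).
n_examinees, n_items = 1000, 20
ability = rng.normal(size=(n_examinees, 1))
difficulty = rng.normal(size=(1, n_items))
responses = (rng.random((n_examinees, n_items))
             < 1 / (1 + np.exp(-(ability - difficulty)))).astype(int)
total_scores = responses.sum(axis=1)

# Each internal node of the fitted tree splits on one item's response, so a
# root-to-leaf path corresponds to an adaptive order of item administration.
tree = DecisionTreeRegressor(max_depth=8, min_samples_leaf=20)
tree.fit(responses, total_scores)

# For illustration, predict the total score of one examinee from the full pattern;
# in an actual CAT only the items along the traversed path would be asked.
print("Estimated total score:", tree.predict(responses[:1])[0])
```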

Cited by 24 publications (19 citation statements). References 15 publications (17 reference statements).

“…DTs are supervised methods built by minimising the square error in the estimation of an explanatory variable (Rokach and Maimon, 2014). As mentioned above, the available research work using the DT methodology as an alternative for CATs, use either the total test's score (Yan et al, 2004; Ueno and Songmuang, 2010) or an external criterion as dependent variable (Delgado-Gomez et al, 2016; Riley et al, 2011). In this section we present a methodology for building a DT that minimises the MSE in the trait's estimation (instead of the test score used in the aforementioned works).…”
Section: Building a CAT with Minimum MSE
confidence: 99%
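The quoted passage contrasts trees built on the total test score with a tree that minimises the MSE of the trait estimate. Below is a hedged sketch of that idea, under the assumption of simulated 2PL-style data; the dimensions and the scikit-learn usage are illustrative, not taken from the cited works.

```python
# Sketch: regression tree with the (simulated) latent trait, rather than the
# total score, as the dependent variable; DecisionTreeRegressor's default split
# criterion is squared error, so splits reduce the MSE of the trait estimate.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n_examinees, n_items = 2000, 25
theta = rng.normal(size=(n_examinees, 1))            # simulated latent traits
b = rng.normal(size=(1, n_items))                    # simulated item difficulties
U = (rng.random((n_examinees, n_items))
     < 1 / (1 + np.exp(-(theta - b)))).astype(int)   # 0/1 response patterns

tree = DecisionTreeRegressor(max_depth=7, min_samples_leaf=50)
tree.fit(U, theta.ravel())

mse = np.mean((tree.predict(U) - theta.ravel()) ** 2)
print("In-sample MSE of the trait estimate:", mse)
```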
“…the minimum Expected Posterior Variance (EPV) (van der Linden and Pashley, 2009), Maximum Likelihood Weighted Information (MLWI) (Veerkamp and Berger, 1997), Kullback-Leibler information (KL) (Chang and Ying, 1996) or mutual information (MI) (Weissman, 2007). Notwithstanding these item selection techniques have solved many of the mentioned weaknesses, the computational cost of some of them limits their application in practice, in particular because of the need of numerical integration (Ueno and Songmuang, 2010).…”
Section: Introduction
confidence: 99%
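The computational-cost remark can be made concrete with a hedged sketch of one of the cited criteria, the Expected Posterior Variance (EPV): for every candidate item the posterior over ability must be re-evaluated by numerical integration, here on a simple quadrature grid. The 2PL model form, item parameters, and all data are illustrative assumptions, not code from the cited papers.

```python
# Sketch of EPV item selection with quadrature-based numerical integration
# (hypothetical 2PL item bank; illustrative only).
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def epv(item, answered, responses, a, b, grid, prior):
    """Expected posterior variance of ability if `item` is asked next."""
    post = prior.copy()
    for j, u in zip(answered, responses):            # posterior given answers so far
        p = p_correct(grid, a[j], b[j])
        post *= p if u == 1 else 1 - p
    post /= post.sum()

    p_next = p_correct(grid, a[item], b[item])
    value = 0.0
    for u, w in ((1, p_next), (0, 1 - p_next)):      # average over both possible answers
        post_u = post * w
        prob_u = post_u.sum()                        # predictive probability of answer u
        post_u /= prob_u
        mean = (grid * post_u).sum()
        value += prob_u * ((grid - mean) ** 2 * post_u).sum()
    return value

rng = np.random.default_rng(2)
a, b = rng.uniform(0.8, 2.0, 50), rng.normal(size=50)   # hypothetical item bank
grid = np.linspace(-4, 4, 81)                            # quadrature nodes
prior = np.exp(-0.5 * grid ** 2)                         # standard-normal prior (unnormalised)
answered, responses = [3, 17], [1, 0]                    # items asked so far and their answers

candidates = [j for j in range(50) if j not in answered]
best = min(candidates, key=lambda j: epv(j, answered, responses, a, b, grid, prior))
print("Next item by minimum EPV:", best)
```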
“…To evaluate on real data the models that we presented above, we can embed them in a unified framework: all of them can be seen as decision trees (Ueno and Songmuang 2010, Yan et al 2014), where nodes are possible states of the test, and edges are followed according to the answers provided by the learner, like a flowchart. Thus, within a node, we have access to an incomplete response pattern, and we want to use our student model and infer the behavior of the learner over the remaining questions.…”
Section: Comparison of Adaptive Testing Models
confidence: 99%
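The "flowchart" view in the quotation (nodes as test states, edges followed according to the answers) can be sketched by walking a fitted regression tree and asking only the items encountered on the path. The training data and the reliance on scikit-learn's internal tree arrays are illustrative assumptions.

```python
# Sketch: administering an adaptive test by traversing a fitted decision tree
# like a flowchart (simulated data; uses scikit-learn's fitted tree_ arrays).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(500, 15))      # simulated 0/1 response patterns
y = X.sum(axis=1)                           # total score as the output variable
tree = DecisionTreeRegressor(max_depth=5).fit(X, y)

def administer(ask_item, fitted_tree):
    """Walk the tree; each node's answer decides which edge to follow."""
    t = fitted_tree.tree_
    node = 0
    while t.children_left[node] != -1:      # -1 marks a leaf node
        item = t.feature[node]              # the item to ask in this state
        answer = ask_item(item)             # 0 or 1 from the examinee
        node = (t.children_left[node] if answer <= t.threshold[node]
                else t.children_right[node])
    return t.value[node][0][0]              # leaf prediction = score estimate

# Example: simulate an examinee whose responses are row 0 of X.
print("Adaptive score estimate:", administer(lambda item: X[0, item], tree))
```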
“…Non-IRT adaptive testing methods for score estimation have been developed as well. For example, Yan, Lewis, and Stocking (2004); Ueno and Songmuang (2010); and Riley, Funk, Dennis, Lennox, and Finkelman (2011) have used CART as an item selection algorithm in adaptive testing, and compared its performance with IRT-based item selection. Note that this approach differs from the sequential testing approach as described earlier, as CART was used for item selection in these studies, not for test selection.…”
confidence: 99%