2012
DOI: 10.1007/978-3-642-34289-9_45
The Study of Item Selection Method in CAT

Abstract: Item selection is one of the most important components of computerized adaptive testing (CAT). The traditional method uses the item information function to select the item with maximum information at the examinee's current ability estimate, so that a maximally informative test yields an accurate estimate of the examinee's ability level. However, this method suffers from high item exposure rates, imbalanced test content, and related problems. To address these issues, this article introduces a new heuristic item selection method. The results …
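The traditional maximum-information rule described in the abstract can be illustrated under a two-parameter logistic (2PL) IRT model. This is a generic sketch, not the paper's own implementation; the item parameters below are hypothetical.

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta
    for an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_max_info(theta, item_bank, administered):
    """Greedy rule: pick the unadministered item with maximum
    information at the current ability estimate theta."""
    candidates = [(i, item_information(theta, a, b))
                  for i, (a, b) in enumerate(item_bank)
                  if i not in administered]
    return max(candidates, key=lambda t: t[1])[0]

# Hypothetical bank of (a, b) pairs; at theta = 0 the highly
# discriminating item (a = 2.0) wins.
bank = [(1.0, 0.0), (2.0, 0.0), (1.0, 1.0)]
```

Because this greedy rule always favors the most informative (typically most discriminating) items, a small subset of the bank is administered over and over, which is exactly the exposure problem the paper criticizes.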

Cited by 4 publications (3 citation statements) | References 14 publications
“…Item selection; When examining the commonly used item selection rules for PIRT, it is found that Fisher Information (FI) and Kullback-Leibler (KL) derivations are most commonly used (Choi & Swartz, 2009; He et al., 2014; Lu et al., 2012; Veldkamp, 2001). The simulation study examined the performance of unweighted Fisher information (UW-FI), Kullback-Leibler information (FP-KL), and posterior weighted Fisher information (PW-FI) for item selection.…”
Section: Post-hoc Simulation (mentioning)
confidence: 99%
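Both criteria named in this statement have simple closed forms for a 2PL item: Fisher information is evaluated at the current ability estimate, while the KL index integrates the divergence between response distributions over a window around it. A minimal sketch (the window half-width delta and all parameters are illustrative assumptions, not values from the paper):

```python
import math

def p2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def kl_item(theta_hat, theta, a, b):
    """KL divergence between the item's response distributions
    at theta_hat and at theta."""
    p0, p1 = p2pl(theta_hat, a, b), p2pl(theta, a, b)
    return (p0 * math.log(p0 / p1)
            + (1.0 - p0) * math.log((1.0 - p0) / (1.0 - p1)))

def kl_index(theta_hat, a, b, delta=1.0, n=200):
    """KL index: trapezoidal integration of kl_item over
    [theta_hat - delta, theta_hat + delta]."""
    step = 2.0 * delta / n
    total = 0.0
    for k in range(n + 1):
        t = theta_hat - delta + k * step
        w = 0.5 if k in (0, n) else 1.0
        total += w * kl_item(theta_hat, t, a, b)
    return total * step
```

Under either criterion the selected item is the unadministered one with the largest index; KL-based rules are often preferred early in the test, when the ability estimate is still unstable.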
“…Traditional item selection algorithms rely entirely on the items' parameter information, and the system adaptively selects the “optimal” item for each learner based on their knowledge level, so the same items are likely to be shown to a learner many times across tests, or to most learners within the same test. This leads to an uneven distribution of item exposure, with some items exposed far too often, increasing the risk of item leakage and compromising test security [ 49 ]. The main means of solving this problem is to control item exposure.…”
Section: Description of the Problem (mentioning)
confidence: 99%
“…Once the CAT has started, various methods can estimate the respondent’s θ after each response [ 20 ]. The next item selected is the one that provides the most information based on the respondent’s prior response [ 21 ]. Lastly, a stop rule can be selected; these can include SE termination, minimum information termination, SE reduction criterion, and a change in θ criterion [ 22 ].…”
Section: Introduction (mentioning)
confidence: 99%
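The SE-termination rule mentioned above follows directly from test information: the standard error of the ability estimate is 1/sqrt(I(theta)), where I is the sum of the administered items' informations, and the test stops once the SE falls below a target or a maximum length is reached. The thresholds below are illustrative assumptions, not values from any cited study:

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def standard_error(theta, administered_items):
    """SE of theta from test information: 1 / sqrt(sum of item infos)."""
    total = sum(info_2pl(theta, a, b) for a, b in administered_items)
    return float('inf') if total == 0 else 1.0 / math.sqrt(total)

def should_stop(theta, administered_items, se_target=0.3, max_items=30):
    """Stop when the SE target is met or the length cap is reached."""
    return (len(administered_items) >= max_items
            or standard_error(theta, administered_items) <= se_target)
```

In practice the SE criterion is usually combined with a minimum and maximum test length, since a fixed SE target alone can produce very short or very long tests for examinees at the extremes of the ability scale.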