2017
DOI: 10.21890/ijres.327907

Computerized Adaptive Test (CAT) Applications and Item Response Theory Models for Polytomous Items

Abstract: This article aims to provide a theoretical framework for computerized adaptive tests (CAT) and item response theory models for polytomous items. It also aims to introduce simulation and live CAT software to interested researchers. To that end, the computerized adaptive test algorithm, the assumptions of item response theory models, the nominal response model, the partial credit and generalized partial credit models, and the graded response model are described in detail. Likewise, item selection methods, such as …
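The adaptive algorithm the abstract outlines reduces to a loop: estimate the examinee's ability, select the unused item that is most informative at that estimate, administer it, and repeat. A minimal sketch in Python under the 2PL model (the function names and the maximum-information selection criterion here are illustrative assumptions, not code from the article):

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of each 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def select_next_item(theta, a, b, administered):
    """Maximum-information selection: most informative unused item."""
    info = item_information(theta, np.asarray(a, float), np.asarray(b, float))
    info[list(administered)] = -np.inf  # never repeat an administered item
    return int(np.argmax(info))
```

For example, with discriminations [1.0, 2.0, 1.5] and all difficulties at 0, the second item carries the most information at theta = 0, so it is selected first; once administered, the third item is chosen next.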

Cited by 22 publications (17 citation statements); references 24 publications (37 reference statements).
“…Moreover, MCTest can gain new functionalities for a student user like question timer, automatic feedback of questions and mark computation, all these independently of Google Form and Sheets. By considering a scenario in which thousands of students are sitting an exam simultaneously, it would also be important for MCTest to automatically calibrate each question's level of difficulty through an Item Response Theory (IRT) (Aybek and Demirtasli, 2017). In this sense, MCTest already emails some statistics of the automatic correction of the digitized exams to the professor, and for that it uses IRT.…”
Section: Discussion
confidence: 99%
“…Like most AIGs, the one proposed here produces questions with some variations, then uses computational resources to draw some parameters, shuffle the questions, and group them according to scope and difficulty. One can also use resources like Item Response Theory (IRT) to improve the calibration of each question's level of difficulty (Aybek and Demirtasli, 2017).…”
Section: Introduction
confidence: 99%
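The IRT calibration both citing papers mention amounts to estimating item parameters from response data. As a toy illustration (not from the cited article), a Rasch (1PL) item's difficulty can be estimated by Newton-Raphson when examinee abilities are treated as known:

```python
import numpy as np

def calibrate_difficulty(thetas, responses, iters=25):
    """Toy maximum-likelihood estimate of a Rasch item's difficulty b,
    assuming examinee abilities `thetas` are known and `responses` are 0/1.
    (The estimate diverges if every examinee answered the same way.)"""
    thetas = np.asarray(thetas, float)
    responses = np.asarray(responses, float)
    b = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(thetas - b)))
        # Newton step: d(logL)/db = sum(p - r), d2(logL)/db2 = -sum(p(1-p))
        b += np.sum(p - responses) / np.sum(p * (1.0 - p))
    return b
```

At the maximum-likelihood estimate, the model's expected number of correct responses matches the observed number, which gives a simple sanity check on convergence.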
“…A fundamental characteristic of this type of testing is that each examinee takes a test tailored to their own ability level (Pasquali, 2007, 2013). CATs have gained ground in the international literature in recent years and have shown a number of advantages over traditional paper-and-pencil tests (Aybek & Demirtasli, 2017).…”
Section: Desenvolvimento de um Banco de Itens para Avaliar o Transtor… (Development of an Item Bank to Assess the Disorder…)
“…The scoring models for dichotomous items are: a) the 1-PL (one-parameter logistic) model, which uses a single parameter, the item difficulty; b) the 2-PL model, which uses two parameters, item difficulty and discrimination; and c) the 3-PL model, which uses three parameters, item difficulty, discrimination, and pseudo-guessing (Mardapi, 2012). Commonly used scoring models for polytomous items include the Graded Response Model (GRM), the Modified Graded Response Model (MGRM), and the Partial Credit Model (PCM) (Aybek & Demirtasli, 2017).…”
Section: Introduction
confidence: 99%
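The models that paragraph lists can be made concrete in a few lines. A hedged Python sketch of the 3PL response probability and the GRM category probabilities (the function names and parameterization are illustrative; the GRM thresholds are assumed to be in increasing order):

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL: discrimination a, difficulty b, pseudo-guessing c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def grm_probs(theta, a, thresholds):
    """Graded Response Model: the cumulative curves P(X >= k) are 2PL
    curves at ordered thresholds; each category probability is the
    difference between adjacent cumulative curves."""
    star = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(thresholds, float))))
    cum = np.concatenate(([1.0], star, [0.0]))
    return cum[:-1] - cum[1:]
```

With two thresholds the GRM yields three category probabilities that sum to one; setting c = 0 in `p_3pl` recovers the 2PL, and additionally fixing a = 1 recovers the 1PL.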