2015
DOI: 10.1590/s0104-40362015000300003

On the complementarity of classical test theory and item response models: item difficulty estimates and computerized adaptive testing

Abstract: This study aims to provide statistical evidence of the complementarity between classical test theory and item response models for certain educational assessment purposes. Such complementarity might support, at a reduced cost, future development of innovative procedures for item calibration in adaptive testing. Classical test theory and the generalized partial credit model are applied to tests comprising multiple choice, short answer, completion, and open response items scored partially. Datasets are derived fr…

Cited by 4 publications (2 citation statements)
References 11 publications
“…Lord (1953) introduced the principles of this theory, which amends the weaknesses of classical test theory by defining fixed item parameters: difficulty, discrimination, and guessing. It also places item difficulty and testees' ability on a common scale, so that test developers can select the most suitable items to classify testees by performance level (Sulaiman, 2009; Costa & Ferrão, 2015). In item response theory, the first step is specifying the item parameters and the ability parameter.…”
Section: Introduction
confidence: 99%
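The three item parameters named in the excerpt above (difficulty, discrimination, guessing) are those of the three-parameter logistic (3PL) model. As an illustration only (the sketch below is not taken from the cited paper; the function name is an assumption), the 3PL item response function can be written as:

```python
import math

def p_correct(theta, a, b, c):
    """3PL item response function: probability that a test-taker with
    ability theta answers the item correctly, given discrimination a,
    difficulty b, and guessing (lower-asymptote) parameter c.
    Illustrative sketch; not code from the cited study."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# When ability equals difficulty (theta == b), the logistic term is 0.5,
# so the probability is c + (1 - c) / 2, e.g. 0.6 when c = 0.2.
print(p_correct(0.0, 1.0, 0.0, 0.2))  # → 0.6
```

Because theta and b sit on the same scale, an item with b near a testee's theta is maximally informative about that testee, which is the property adaptive testing exploits.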
“…In the digital world, any conceptual assessment framework faces two main challenges: (a) the complexity of the knowledge, capacities, and skills to be assessed; and (b) the increasing use of computer- and web-based assessments, which requires innovative approaches to test development, delivery, and scoring. Ferrão and Prata (2014; 2015) therefore explore the adoption of computerized adaptive testing (CAT), aiming to reduce test size while controlling the impact of that reduction on measurement error; in other words, to produce tests structured to yield results that faithfully reflect students' degree of knowledge acquisition. In Costa & Ferrão (2015) the authors conceptually present three essential modules of a CAT platform upon which the Adaptive Test Developer operates: Informatics (procedures for test delivery and data collection), Statistical methods (procedures for data modelling, scoring, and calibration), and Topic contents (the item bank and its management procedures). There are two statistical approaches to analysing tests, both as a whole and on a per-item basis: classical test theory (CTT) and item response theory (IRT) (Hambleton, Swaminathan, & Rogers, 1991).…”
Section: Introduction
confidence: 99%
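The CTT and IRT approaches contrasted in the excerpt above can be sketched side by side. This is an illustrative assumption, not the cited paper's method: CTT summarizes an item by its proportion-correct difficulty, while a CAT engine under a 2PL IRT model commonly administers the item with the highest Fisher information at the current ability estimate. All function names and the toy item bank below are hypothetical.

```python
import math

def ctt_difficulty(responses):
    """CTT item difficulty: proportion of correct (1) responses.
    Higher values mean an easier item. Illustrative sketch only."""
    return sum(responses) / len(responses)

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * p * (1 - p), maximal where theta == b."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_next_item(theta, item_bank):
    """Minimal CAT item-selection rule: pick the (a, b) pair from the
    bank with the highest information at the current ability estimate."""
    return max(item_bank, key=lambda ab: item_information(theta, *ab))

# Hypothetical three-item bank of (discrimination, difficulty) pairs:
bank = [(1.0, -2.0), (1.0, 0.0), (1.0, 2.0)]
print(ctt_difficulty([1, 1, 0, 1]))     # → 0.75
print(select_next_item(0.0, bank))      # → (1.0, 0.0)
```

With equal discriminations, the rule picks the item whose difficulty is closest to the ability estimate, which is how CAT shortens tests without inflating measurement error.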