2021
DOI: 10.2139/ssrn.3824911

Calibration of Polytomous Response Mathematics Achievement Test Using Generalized Partial Credit Model

Cited by 4 publications (4 citation statements), published 2022–2024. References 0 publications.
“…IRT is a set of algorithms that measure each item's characteristic on a scale, which corresponds to an individual's traits (Ayanwale, 2019; Ayanwale et al., 2022; van der Linden & Glas, 2010). In IRT models such as the one-parameter (which looks at the difficulty, b, of the item), two-parameter (which looks at the discrimination, a, after the item difficulty parameter is computed), three-parameter (which looks at guessing, c, in addition to b and a), and four-parameter (which looks at carelessness, d, in addition to b, a and c), the examinee's behaviour is taken into account at the item level (Ayanwale et al., 2018, 2019; Baker, 2004). By modelling at the item level, scores can be reported, and CAT can be developed more efficiently.…”
Section: Literature Review (mentioning)
confidence: 99%
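The parameterisation described in the statement above (difficulty b, discrimination a, guessing c, carelessness d) corresponds to the four-parameter logistic item response function, of which the one-, two- and three-parameter models are special cases. A minimal Python sketch, assuming the standard notation θ for the examinee trait; the function name and example parameter values are illustrative and not taken from the cited paper:

```python
import math

def four_pl_probability(theta, a, b, c, d):
    """Probability of a correct response under the four-parameter logistic (4PL) model.

    theta : examinee trait (ability) level
    a     : discrimination   b : difficulty
    c     : lower asymptote (guessing)
    d     : upper asymptote (1 minus carelessness)
    """
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# The 1PL, 2PL and 3PL models are special cases obtained by fixing
# a = 1 (1PL), c = 0 (1PL/2PL) and d = 1 (1PL/2PL/3PL).
print(four_pl_probability(theta=0.5, a=1.2, b=0.0, c=0.2, d=0.95))
```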
“…Item response theory makes use of the pattern of responses to all the test items to model a test taker's skill and the likelihood that they would answer items correctly. Rather than the raw test results, the focus of IRT is on an examinee's accuracy on a given item (Ayanwale, 2019). Based on the pattern of item responses, the item-pattern scoring method yields a maximum likelihood trait estimate (Eaton et al., 2019; Bichi & Talib, 2018).…”
Section: Introduction (mentioning)
confidence: 99%
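Item-pattern scoring, as described above, selects the trait value that maximises the likelihood of the observed response pattern rather than simply counting correct answers. A minimal Python sketch using a simple grid search under the two-parameter logistic model; the item parameters and response patterns are made-up illustrations, not data from the cited study:

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL probability of a correct response at trait level theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ml_trait_estimate(responses, params, grid=None):
    """Maximum likelihood trait estimate for a 0/1 response pattern.

    responses : list of 0/1 item scores
    params    : list of (a, b) tuples, one per item
    """
    if grid is None:
        grid = [i / 100.0 for i in range(-400, 401)]  # theta from -4 to 4
    best_theta, best_loglik = None, float("-inf")
    for theta in grid:
        loglik = 0.0
        for x, (a, b) in zip(responses, params):
            p = p_correct_2pl(theta, a, b)
            loglik += math.log(p) if x == 1 else math.log(1.0 - p)
        if loglik > best_loglik:
            best_theta, best_loglik = theta, loglik
    return best_theta

# Illustrative: two examinees with the same raw score (2 items correct)
# can receive different trait estimates, because the pattern of which
# items were answered correctly carries information.
items = [(1.5, -1.0), (1.0, 0.0), (2.0, 1.0)]
print(ml_trait_estimate([1, 1, 0], items))
print(ml_trait_estimate([0, 1, 1], items))
```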
“…In spite of the benefits of multiple-choice tests in educational assessments, they are usually faced with problems in the written question, often referred to as the stem, and problems associated with the alternative responses, called the distractors, owing to non-compliance with the principles of item writing [12]. When items are poorly constructed, particularly if the distractors are not plausible enough, guessing may be encouraged, which may lead to poor test quality [8].…”
Section: Introduction (mentioning)
confidence: 99%
“…When items are poorly constructed, particularly if the distractors are not plausible enough, guessing may be encouraged, which may lead to poor test quality [8]. Therefore, in pursuance of excellence in the field of assessment, experts are needed in Nigeria to develop multiple-choice test items that meet the expected psychometric properties using new techniques [12].…”
Section: Introduction (mentioning)
confidence: 99%