2021
DOI: 10.31234/osf.io/4pk6x
Preprint

Theoretical evaluation of partial credit scoring of the multiple-choice test item

Abstract: We consider the effect of chance success due to guessing on the error of measurement in multiple-choice tests. In these types of tests, test-takers may eliminate some of the answer options to increase the expected score when guessing, at the cost of increased measurement error. We consider an arbitrary multiple-choice test taken by an arbitrary test-taker who is expected to know an arbitrary fraction of its keys and distractors. For this model, we introduce a mathematically optimal scoring system that minimize…
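As a rough illustration of the trade-off the abstract describes (not the paper's own optimal scoring system, which the truncated abstract does not specify), the sketch below computes the expected score and score variance of a single purely guessed item under plain number-right scoring, assuming the test-taker guesses uniformly among the m options that remain after eliminating recognized distractors.

```python
# Illustrative sketch only: expected score and score variance of one blindly
# guessed multiple-choice item under number-right scoring (1 point for the
# key, 0 otherwise), assuming a uniform guess among the m remaining options.

def guessing_stats(m: int) -> tuple[float, float]:
    """Return (expected score, score variance) for a blind guess among m options."""
    p = 1.0 / m                # probability of hitting the single key
    return p, p * (1.0 - p)    # mean and variance of a Bernoulli(p) score

if __name__ == "__main__":
    for m in (5, 4, 3, 2):     # fewer remaining options = more distractors eliminated
        mean, var = guessing_stats(m)
        print(f"{m} options left: E[score] = {mean:.3f}, Var[score] = {var:.3f}")
```

As m shrinks from 5 to 2, the expected chance score rises from 0.20 to 0.50 while the variance of the guessed score rises from 0.16 to 0.25, consistent with the abstract's point that eliminating options raises the expected score from guessing at the cost of added measurement error.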

Cited by 1 publication (1 citation statement); references 22 publications.
“…In a previous contribution (Persson, 2021, hereinafter designated "Paper I"), we made a theoretical comparison of the ideal properties of a number of scoring functions for the multiple-choice test item; that is, the type of selected-response item where there is one and only one correct option ("key") among any number of incorrect ones ("distractors"). Here, we report a similar analysis, but for the type of selected-response item where there is an arbitrary integer number k of keys among c options in total; following Ma (2004), we call this type of test item a "multiple-response" one.…”
Section: Introduction (mentioning; confidence: 99%)