LYNNE P. BALDWIN
Brunel University, UK

ABSTRACT The reasoning behind popular methods for analysing the raw data generated by multiple choice question (MCQ) tests is not always appreciated, occasionally with disastrous results. This article discusses and analyses three options for processing the raw data produced by MCQ tests. At one extreme, a student is penalized neither for wrong answers nor for omitted questions; at the other extreme, both are penalized. The intermediate option, which considers only the questions actually attempted while penalizing wrong answers, can be regarded as the fairest: blind guessing will on average not help the student, although partial knowledge will lessen the negative impact on the final overall score. There remain many interesting challenges in designing techniques for MCQ tests.
KEYWORDS: computer-based assessment, examinations, multiple choice question tests, penalizing guessing, test scores
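The intermediate scheme described in the abstract can be sketched with the standard correction-for-guessing formula, where each wrong answer on a question with k options costs 1/(k-1) marks; this is a minimal illustration under that assumption, not the article's exact scoring rule. The function name `score` and the scheme labels are hypothetical.

```python
import random

def score(correct, wrong, k, scheme):
    """Score an MCQ test under two of the schemes discussed.

    'none'     : number right only, no penalty for wrong or omitted answers
    'penalize' : subtract 1/(k-1) per wrong answer (correction for guessing),
                 so a pure guesser's expected score is zero
    k is the number of options per question; omitted questions score nothing.
    """
    if scheme == "none":
        return correct
    if scheme == "penalize":
        return correct - wrong / (k - 1)
    raise ValueError(f"unknown scheme: {scheme}")

# Simulate blind guessing: with k options, P(correct) = 1/k, so over n
# questions E[score] = n/k - (n*(k-1)/k)/(k-1) = 0 under 'penalize'.
random.seed(0)
n, k = 1000, 4
correct = sum(1 for _ in range(n) if random.randrange(k) == 0)
wrong = n - correct
print(round(score(correct, wrong, k, "penalize"), 2))  # close to 0 on average
```

A student who answers 5 questions correctly and 15 incorrectly on a 4-option test scores 5 - 15/3 = 0 under the penalized scheme, exactly the expected outcome of guessing all 20, whereas partial knowledge that raises the hit rate above 1/k yields a positive score.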