Automatically grading essay questions can offer advantages for instructors in higher education. Understanding and specifying how grading is done manually, so that it can be automated, is a labor-intensive effort in knowledge elicitation, acquisition, and representation. This paper describes how an interdisciplinary team used conceptual graphs to formally specify the model of a good essay response, and how that expert model was then used as the standard against which student responses were judged. The methodology for representing student responses as conceptual graphs is then described; these representations were compared to the expert model using two different approaches. It was found that most students included the most important concepts, but student answers that were more complete (i.e., that also included concepts of lesser importance) received higher grades. The approaches are then evaluated in terms of reliability and validity, and, finally, suggestions are made for future work.
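The comparison described above can be illustrated with a minimal sketch. This is not the paper's actual method; it assumes a simplified expert model in which each concept carries an importance weight, and a student response is scored by the weighted fraction of expert concepts it covers. All names (`EXPERT_MODEL`, `grade_response`) and the concept weights are hypothetical.

```python
# Hypothetical sketch of scoring a student response against an expert
# concept model. Concepts and weights are illustrative only, not taken
# from the paper's expert model.

EXPERT_MODEL = {
    # concept -> importance weight (higher = more important)
    "photosynthesis": 3.0,
    "chlorophyll": 2.0,
    "light energy": 2.0,
    "glucose": 1.0,   # lesser-importance concept
    "stomata": 1.0,   # lesser-importance concept
}

def grade_response(student_concepts: set) -> float:
    """Return the weighted fraction of expert concepts the student covered."""
    total = sum(EXPERT_MODEL.values())
    covered = sum(w for c, w in EXPERT_MODEL.items() if c in student_concepts)
    return covered / total

# A response containing only the most important concepts scores lower
# than a more complete response that also covers lesser concepts.
partial = grade_response({"photosynthesis", "chlorophyll", "light energy"})
complete = grade_response({"photosynthesis", "chlorophyll", "light energy",
                           "glucose", "stomata"})
```

Under this toy weighting, the more complete response receives the higher score, mirroring the paper's finding that completeness beyond the core concepts raised grades.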