Following up on an earlier issue of The Curriculum Journal (Vol. 16, No. 1), this article focuses on learning outcomes in the context of video games. Learning outcomes are viewed from two theoretical frameworks: Kirkpatrick's levels of evaluation and the CRESST model of learning. These are used to analyse the outcomes claimed in journal articles that report empirical work, indicating the usefulness of the frameworks and the need to consider the role of affective learning. The article ends with some comments on the relationship of instructional design to effective games and learning outcomes.
Strong claims are made for the potential educational effectiveness of narrative-based adventure games, but evidence about how to construct effective educational games is needed (Clark, Yates, Early, & Moulton, 2010; O'Neil & Perez, 2008). College students played a computer-based narrative discovery learning game called Crystal Island (Spires et al., 2010), in which they learned about pathogens (in Experiment 1), or one called Cache 17 (Koenig, 2008), in which they learned how electromechanical devices work (in Experiment 2). In media comparison tests, participants who learned by playing the game performed worse than students who learned from a matched slideshow presentation on retention (d = 1.37), transfer (d = 0.57), and difficulty rating (d = 0.93) in Experiment 1, and on posttest score (d = 0.31) and learning time (d = 2.89) in Experiment 2. In value-added tests, removing the detective-story narrative theme from the Cache 17 game did not significantly affect students' posttest score (d = −0.16) or learning time (d = −0.22) in Experiment 2. Overall, these results provide no evidence that computer-based narrative games offer a superior venue for academic learning within short time spans of less than 2 hours. The findings contradict the discovery hypothesis that students learn better when they do hands-on activities in engaging scenarios during learning, and the narrative hypothesis that students learn better when games have a strong narrative theme, although there is no evidence concerning longer periods of game play.
Computer-based instructional simulations are becoming increasingly common, particularly in military and medical domains. As the technology that drives these simulations grows ever more sophisticated, the underlying pedagogical models for how instruction, assessment, and feedback are implemented within these systems must evolve accordingly. In this article, we review some of the existing educational approaches to medical simulations, and present pedagogical methodologies that have been used in the design and development of games and simulations at the University of California, Los Angeles, Center for Research on Evaluation, Standards, and Student Testing. In particular, we present a methodology for how automated assessments of computer-based simulations can be implemented using ontologies and Bayesian networks, and discuss their advantages and design considerations for pedagogical use.
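To make the Bayesian-network idea mentioned above concrete, here is a minimal sketch (not the CRESST implementation, whose structure and parameters are not given in the abstract): a two-node network linking a latent "mastery" variable to observed task outcomes, with the posterior updated by Bayes' rule as evidence arrives from a simulation. All probability values are illustrative assumptions.

```python
# Illustrative two-node Bayesian network for automated assessment:
# latent node "mastery" -> observed node "task outcome".
# The conditional probabilities below are assumed for the sketch.

def update_mastery(prior, correct, p_correct_given_mastery=0.9,
                   p_correct_given_no_mastery=0.2):
    """Posterior P(mastery | one observed task outcome) via Bayes' rule."""
    if correct:
        like_m = p_correct_given_mastery
        like_n = p_correct_given_no_mastery
    else:
        like_m = 1 - p_correct_given_mastery
        like_n = 1 - p_correct_given_no_mastery
    numerator = like_m * prior
    return numerator / (numerator + like_n * (1 - prior))

# Sequential assessment: update belief after each simulated task attempt.
belief = 0.5  # uninformative prior on mastery
for outcome in [True, True, False, True]:
    belief = update_mastery(belief, outcome)
print(round(belief, 3))  # → 0.919
```

A full assessment system would use many such nodes, one per skill in the domain ontology, with conditional dependencies among skills; the sequential update shown here is the core inference step that lets the simulation revise its estimate of a learner's competence after each observed action.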