This investigation examines several key factors in computer-based versus paper-based assessment. Based on earlier research, the factors considered here include content familiarity, computer familiarity, competitiveness, and gender. Following classroom instruction, freshman business undergraduates (N = 105) were randomly assigned to either a computer-based test or an identical paper-based test. ANOVA of the test data showed that the computer-based test group outperformed the paper-based test group. Gender, competitiveness, and computer familiarity were NOT related to this performance difference, though content familiarity was. Higher-attaining students in the computer-based condition benefited most relative to higher-attaining students under paper-based testing. With the current increase in computer-based assessment, instructors and institutions must be aware of and plan for possible test mode effects.

As pointed out by other authors in this special issue, the use of computer-based assessment is increasing for many reasons. Examples include state drivers' license exams, military training exams, job application exams in the private sector, entrance exams in postsecondary education, and certification exams by professional groups (Russo, 2002; Trotter, 2001). However, there is mounting empirical evidence in the literature that identical paper-based and computer-based tests will not obtain the same results. Such findings are referred to as the "test mode effect." For example, paper-based test scores were greater than computer-based test scores for both mathematics and English CLEP tests (Mazzeo, Druesne, Raffeld, Checketts, & Muhlstein, 1991) and for recognizing fighter plane silhouettes (Federico, 1989), while computer-based test scores were greater than paper-based test scores for a dental hygiene course unit midterm examination (DeAngelis, 2000); though other studies
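The group comparison described above can be illustrated with a minimal one-way ANOVA sketch. The scores below are invented for illustration and are not the study's data; with two groups, the F test is equivalent to a two-sample t test.

```python
# Minimal one-way ANOVA, computed from first principles (no external
# libraries). The score lists are hypothetical, not the study's data.

def one_way_anova(*groups):
    """Return the F statistic for a one-way ANOVA across the given groups."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Between-groups sum of squares: how far each group mean sits
    # from the grand mean, weighted by group size.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-groups sum of squares: spread of scores around their
    # own group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

computer_based = [78, 85, 90, 82, 88, 91, 76, 84]  # hypothetical scores
paper_based    = [72, 79, 81, 75, 80, 77, 70, 74]  # hypothetical scores

f_stat = one_way_anova(computer_based, paper_based)
print(f"F = {f_stat:.2f}")
```

A large F statistic relative to the F distribution's critical value (for the relevant degrees of freedom) indicates that the between-group difference is unlikely to be due to chance, which is the pattern of result the abstract reports.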
This investigation considers the effects of feedback on memory, with an emphasis on retention of initial error responses. Based on a connectionist model (Clariana, 1999a), this study hypothesized that delayed-retention memory of initial lesson responses would be greater for delayed feedback compared to immediate feedback, that feedback effects would be greatest with difficult items, and that there would be a disordinal interaction of feedback timing and item difficulty. High school students (n = 52) completed a computer-based lesson with either delayed feedback, single-try immediate feedback, or multiple-try immediate feedback. There was a significant difference for type of feedback, with retention test memory of initial lesson responses greater under delayed feedback than under immediate feedback. Also, instructional feedback effects varied depending on lesson item difficulty. The findings indicate that a connectionist model can explain instructional feedback effects.

Learning involves the interaction of new information provided by instruction with existing information already in the learner's memory (Ausubel, 1968; Bruner, 1990). When a learner commits to a lesson response, that response reflects the learner's immediate understanding of that instructional instance. Lesson responses, and especially initial lesson responses (ILRs), are a useful and interesting measure of a learner's existing information. During learning, when an ILR is the correct response, feedback should confirm and strengthen that memory trace. When an ILR is the incorrect response, corrective feedback refutes the ILR. Current models of the effects of feedback focus on what happens to memory associations that correspond to the correct response, but there are few data and no generally accepted theory-based explanation for what effect feedback has on ILR errors.
It is plausible that the memory trace of an initial error may be weakened because it is an error, it may remain unaffected, or, less likely, an initial error may be strengthened, perhaps because it has been brought to the attention of the learner. Describing what happens to memory traces of ILR errors is necessary for determining whether ILR errors interfere with attaining correct responses, and so is a key to understanding how feedback works. During retention tests, learners can be asked to identify their ILRs as well as to indicate the correct responses. Lesson ILR data can then be compared to the learners' retention test memory of their ILRs. This comparison would indicate whether ILR memory traces are weakened, strengthened, or remain unchanged as a result of the lesson feedback. These data can also be compared to memory of