Instructors can use both "multiple-choice" (MC) and "constructed response" (CR) questions (such as short answer, essay, or problem-solving questions) to evaluate student understanding of course materials and principles. This article begins by discussing the advantages of and concerns about these alternate test formats, and reviews the studies conducted to test the hypothesis (or perhaps better described as the hope) that MC tests, by themselves, adequately evaluate student understanding of course materials. Despite research from educational psychology demonstrating the potential for MC tests to measure the same levels of student mastery as CR tests, recent studies in specific educational domains find imperfect relationships between these two performance measures. We suggest that a significant confound in prior experiments has been the treatment of MC questions as homogeneous entities when in fact MC questions may test widely varying levels of student understanding. The primary contribution of the article is a modified research model for CR/MC research based on knowledge-level analyses of MC test banks and CR question sets from basic computer language programming. The analyses are based on an operationalization of Bloom's Taxonomy of Learning Goals for the domain, which is used to develop a skills-focused taxonomy of MC questions. We propose that these analyses readily generalize to similar teaching domains of interest to decision sciences educators, such as modeling and simulation programming.
Keywords: cheating, ethical behavior, student dishonesty, student misconduct, theory of reasoned action
Identifying variables that predict computer aptitude can help educators and employers target potential students and employees. The authors examine a number of possible explanatory variables, including demographic profiles, high school achievements, prior computer training and experience, cognitive styles, and problem-solving abilities.
Both professional certification and academic tests rely heavily on multiple-choice questions, despite the widespread belief that alternate, constructed-response questions are superior measures of a test taker's understanding of the underlying material. Empirically, the search for a link between these two assessment metrics has met with limited success, leading some researchers to conclude that the two are closely related and others to conclude that no relationship exists at all. The authors suggest that "knowledge level" may play a key role in explaining this disparity in findings. This article outlines the theory behind that concept and investigates it using 172 carefully constructed tests administered in several entry-level programming classes. The article also discusses several caveats that point to the need for further research in the area.
The large increases in the number of information systems (IS) majors about 10 years ago have been matched by equally large decreases in IS enrollments over the last few years. This article addresses the question of why students choose any major in general, and why students no longer choose to become IS majors in particular. We used a validated survey instrument and the responses of 163 students to examine this question in detail. Not surprisingly, we found that "genuine interest" in the subject was the most salient factor affecting the decision to major in IS. More surprising were the factors that did not appear to influence this decision: for example, the promise of good job salaries, job security, the advice of others, or even the image of those who become IS professionals. Students seem aware that information technology employment opportunities exist; if job and salary issues contribute to choosing majors other than IS, it is due to the perception of an unfavorable work/salary ratio for our field rather than to concerns about job security or availability. That is, the amount of work required to earn an IS degree (the perception of harder-than-average courses), combined with (for many students) the perception of an undesirable amount of continuous training needed to sustain an IS career, just does not seem to balance with salary levels. These findings have important implications for the recruiting efforts of IS faculty seeking to attract more IS majors.