Computer Self-Efficacy: Development of a Measure and Initial Test

The existence of a reliable and valid measure of self-efficacy makes assessment possible and should have implications for organizational support, training, and implementation.
While computer training is widely recognized as an essential contributor to the productive use of computers in organizations, very little research has focused on identifying the processes through which training operates, and the relative effectiveness of different methods for such training. This research examined the training process, and compared a behavior modeling training program, based on Social Cognitive Theory (Bandura [Bandura, A. 1977. Self-efficacy: Toward a unifying theory of behavioral change. Psych. Rev. 84(2) 191–215; Bandura, A. 1978. Reflections on self-efficacy. Adv. Behavioral Res. Therapy 1 237–269; Bandura, A. 1982. Self-efficacy mechanism in human agency. Amer. Psychologist 37(2) 122–147; Bandura, A. 1986. Social Foundations of Thought and Action. Prentice-Hall, Englewood Cliffs, NJ.]), to a more traditional, lecture-based program. According to Social Cognitive Theory, watching others performing a behavior, in this case interacting with a computer system, influences the observers' perceptions of their own ability to perform the behavior, or self-efficacy, and the expected outcomes that they perceive, as well as providing strategies for effective performance. The findings provide only partial support for the research model. Self-efficacy exerted a strong influence on performance in both models. In addition, behavior modeling was found to be more effective than the traditional method for training in Lotus 1-2-3, resulting in higher self-efficacy and higher performance. For WordPerfect, however, modeling did not significantly influence performance. This finding was unexpected, and several possible explanations are explored in the discussion. Of particular surprise were the negative relationships found between outcome expectations and performance. Outcome expectations were expected to positively influence performance, but the results indicated a strong negative effect.
Measurement limitations are presented as the most plausible explanation for this result, but further research is necessary to provide conclusive explanations.
Information systems researchers, like those in many other disciplines in the social sciences, have debated the value and appropriateness of using students as research subjects. This debate appears in several articles that have been published on the subject as well as in the review process. In this latter arena, however, the debate has become increasingly like a script—the actors (authors and reviewers) simply read their parts of the script; some avoid the underlying issues whereas others cursorily address generalizability without real consideration of those issues. As a result, despite the extent of debate, we seem no closer to a resolution. Authors who use student subjects rely on their scripted arguments to justify the use of student subjects and do not always consider whether those arguments are valid. But reviewers who oppose the use of student subjects are equally culpable. They, too, rely on scripted arguments to criticize work using student subjects, and do not always consider whether those arguments are salient to the particular study. By presenting and reviewing one version of this script in the context of theoretical discussions of generalizability, we hope to demonstrate its limitations so that we can move beyond these scripted arguments into a more meaningful discussion. To do this, we review empirical studies from the period 1990–2010 to examine the extent to which student subjects are being used in the field and to critically assess the discussions within the field about the use of student samples. We conclude by presenting recommendations for authors and reviewers, for determining whether the use of students is appropriate in a particular context, and for presenting and discussing work that uses student subjects.
Organizations today face great pressure to maximize the benefits from their investments in information technology (IT). They are challenged not just to use IT, but to use it as effectively as possible. Understanding how to assess the competence of users is critical in maximizing the effectiveness of IT use. Yet the user competence construct is largely absent from prominent technology acceptance and fit models, poorly conceptualized, and inconsistently measured. We begin by presenting a conceptual model of the assessment of user competence to organize and clarify the diverse literature regarding what user competence means and the problems of assessment. As an illustrative study, we then report the findings from an experiment involving 66 participants. The experiment was conducted to compare empirically two methods (paper and pencil tests versus self-report questionnaire), across two different types of software, or domains of knowledge (word processing versus spreadsheet packages), and two different conceptualizations of competence (software knowledge versus self-efficacy). The analysis shows statistical significance in all three main effects. How user competence is measured, what is measured, and what measurement context is employed: all influence the measurement outcome. Furthermore, significant interaction effects indicate that different combinations of measurement methods, conceptualizations, and knowledge domains produce different results. The concept of frame of reference, and its anchoring effect on subjects' responses, explains a number of these findings. The study demonstrates the need for clarity both in defining what type of competence is being assessed and in drawing conclusions regarding competence based upon the types of measures used.
Since the results suggest that the definition and measurement of the user competence construct can change the ability score being captured, existing information systems (IS) models of usage must incorporate the concept of an ability rating. We conclude by discussing how user competence can be incorporated into the Task-Technology Fit model, as well as additional theoretical and practical implications of our research.