“…All of these studies found that students' ratings of their instructors were higher when students were told the ratings were being used for personnel or administrative decisions versus other purposes (Aleamoni & Hexner, 1973; Centra, 1976; Driscoll & Goodwin, 1979; Sharon & Bartlett, 1969; Smith, Hassett, & McIntyre, 1982). Again, these investigations used leniency error, and not rating accuracy, as the primary index of rating quality.…”
“…Taylor and Wherry (1951) reported more favorable ratings in a military setting when raters were told that the results would be used for administrative purposes, but Berkshire and Highland (1953) found no significant difference between ratings of military personnel obtained for administrative purposes and those obtained for research purposes. Sharon and Bartlett (1969) reported more favorable student ratings in a college teaching situation when raters were informed that the results might be used administratively, but Centra (1976) found that the differences in student ratings in a college teaching situation did not appear large enough or consistent enough to have practical significance.…”
An extensive review of the research concerning the effect of different variables on student ratings is presented. A study is then reported comparing the effects of different sets of instructions on student evaluations of the course and instructor. The results indicated that the students who were informed that the results of their ratings would be used for administrative decisions rated the course and instructor more favorably on all aspects than students who were informed that the results of their ratings would only be used by the instructor.

In the mad rush to make courses "relevant" and meet new demands of accountability, colleges and universities have proposed many methods of evaluating the effectiveness of instruction. Such proposals generally indicate that many elements of the instructional setting need to be evaluated by several different audiences. Unfortunately, most proposals that are operationalized rest solely on the use of student ratings of instructors and informal colleague opinions. That students are able to provide reliable and valid evaluations of instructional quality has come to be recognized (Aleamoni, 1978; Costin et al., 1971).

Much of the research on student rating of instructors has been concerned with the effect of different variables on these ratings. Due in part to the use of different course evaluation forms and to the use of differing research methodologies, the results of these investigations are often discrepant. Some of the variables which have been investigated include (a) reliability and validity of student ratings, (b) reliability and validity of student rating instruments, (c) class size, (d) sex of the student and sex of the instructor, (e)
“…For example, as numerous researchers have demonstrated, the purpose of the appraisal affects rating processes and outcomes (Bernardin & Beatty, 1984; DeNisi, Cafferty, & Meglino, 1984; Murphy, Balzer, Kellam, & Armstrong, 1984; Sharon & Bartlett, 1969; Williams, DeNisi, Blencoe, & Cafferty, 1985; Zedeck & Cascio, 1982). Appraisals conducted for developmental purposes, for example, are less prone to rating biases (say, elevation or leniency) than are appraisals conducted for administrative decision-making purposes (Meyer et al., 1965; Zedeck & Cascio, 1982).…”
Over two decades ago, Bernardin and Beatty (1984) identified many interdependent purposes of performance appraisal, including to improve the use of resources and serve as a basis for personnel actions. Similarly, Cleveland and