This chapter reviews literature from approximately mid-1993 through early 1996 in the areas of performance and criteria, validity, statistical and equal opportunity issues, selection for work groups, person-organization fit, applicant reactions to selection procedures, and research on predictors, including ability, personality, assessment centers, interviews, and biodata. The review revolves around three themes: (a) attention toward criteria and models of performance, (b) interest in personality measures as predictors of job performance, and (c) work on the person-organization fit selection model. In our judgment, these themes merge when it is recognized that the development of performance models that differentiate criterion constructs reveals highly interpretable relationships between the predictor domain (i.e. ability, personality, and job knowledge) and the criterion domain (i.e. technical proficiency, extra-technical proficiency constructs such as prosocial organizational behavior, and overall job performance). These and related developments are advancing the science of personnel selection and should enhance selection practices in the future.
A total of 52 supervisory personnel were trained under one of three performance-appraisal training programs: rater error (response set) training, observation training, or decision-making training. Halo, leniency, range restriction, and accuracy measures were collected before and after training from the three training groups and a no-training control group. The results suggested that although the traditional rater error training, best characterized as inappropriate response set training, reduced the classic rater errors (or statistical effects), it also detrimentally affected rating accuracy. In contrast, observation and decision-making training increased rating accuracy after training but did little to reduce the classic rater effects. The need for a reconceptualization of rater training content and measurement focus is discussed in terms of the uncertain relation between statistical rating effects and accuracy.
Although the value of an appraisal system traditionally has been judged according to reliability and validity indexes, more recent discussion suggests that user acceptance may be critical to a system’s successful implementation and continued use. This study examined the notion of using acceptability as a criterion for evaluating performance appraisal techniques. Data analyses indicated that motivation to rate, trust in others, and situational constraints were predictive of acceptability for both supervisors and job incumbents. In addition, differences in acceptability were found across rating sources and rating forms, with supervisors’ perceptions more favorable than job incumbents’ and a global rating form significantly less acceptable to all raters. Results are discussed in terms of the usefulness of an acceptability criterion in applied research.