This research examined differences in the predictive power of alternative scale weighting methods in the context of job evaluation. Two different point-factor job evaluation instruments were used to evaluate 71 managerial and service jobs at a metropolitan university, and five weighting models were compared in terms of predictive validity and resulting salary classifications. For the job evaluation system with high multicollinearity and validity concentration, no significant differences in accuracy were found among the weighting methods. In the more heterogeneous system, however, prediction models based on unit weights, correlational weights, and multiple regression had significantly higher predictive validity than models based on equal raw-score weights or on rational weights developed by a job evaluation committee. In addition, the weighting models differed substantially in the policy wages they predicted and the classification structures they produced.
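The weighting schemes named in the abstract above can be illustrated with a brief sketch. The data below are synthetic and purely hypothetical (the study's factor ratings and salaries are not reproduced here), and "predictive validity" is operationalized, as is conventional, as the correlation between the weighted composite score and salary. Committee-judgmental (rational) weights are omitted because they are not computable from data.

```python
# Hedged sketch (synthetic data, not the study's): building composite
# job-evaluation scores under four data-based weighting schemes and
# comparing their predictive validity (correlation with salary).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ratings of 50 jobs on 4 compensable factors.
factors = rng.normal(50, 10, size=(50, 4))
# Hypothetical salaries, driven mostly by the first two factors.
salary = factors @ np.array([0.5, 0.3, 0.1, 0.1]) + rng.normal(0, 2, 50)

def validity(weights):
    """Correlation between the weighted composite and salary."""
    composite = factors @ weights
    return np.corrcoef(composite, salary)[0, 1]

# (1) Equal raw-score weights: sum the unstandardized ratings.
equal_raw = np.ones(4)

# (2) Unit weights: equal weights applied to standardized factors.
z = (factors - factors.mean(axis=0)) / factors.std(axis=0)
unit_validity = np.corrcoef(z.sum(axis=1), salary)[0, 1]

# (3) Correlational weights: each factor weighted by its zero-order
# correlation with salary.
corr_w = np.array(
    [np.corrcoef(factors[:, j], salary)[0, 1] for j in range(4)])

# (4) Multiple regression weights: least-squares coefficients
# (intercept included in the fit, then dropped; a constant shift
# does not change the correlation).
coefs, *_ = np.linalg.lstsq(
    np.column_stack([factors, np.ones(50)]), salary, rcond=None)
reg_w = coefs[:4]

for name, r in [("equal raw", validity(equal_raw)),
                ("unit", unit_validity),
                ("correlational", validity(corr_w)),
                ("regression", validity(reg_w))]:
    print(f"{name:>13} weights: r = {r:.3f}")
```

In-sample, the regression weights maximize the composite-salary correlation by construction, so differences among the schemes show up mainly when factors are weakly intercorrelated; with high multicollinearity the composites converge, which is consistent with the null result the abstract reports for the first instrument.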
Job evaluation has received a great deal of attention recently, in part because of its potential role in the gender-related pay equity issue. Investigators have been particularly interested in psychometric characteristics of job evaluation instruments and plans, such as reliability, gender bias, construct validity, and predictive accuracy. One aspect of job evaluation methodology that could directly affect compensation systems is the weighting of factors in scoring jobs. However, very little information exists in the compensation literature about the differential effects of alternative weighting methods or the psychometric parameters that may contribute to such differences. The present article reviews past psychometric research related to weighting and presents evidence of the salary effects of four different weighting methods examined in an applied job evaluation setting. The study sample consisted of 52 jobs in a municipal government. The four weighting methods were: (1) an unweighted raw-score composite, (2) equal unit weights, (3) committee-judgmental weights, and (4) multiple regression weights. Results indicated high agreement among the four methods in terms of ordinal rankings of the pay rates for the 52 jobs. However, when jobs were classified into pay grades using the alternative weighting models, distinct differences occurred. Particularly relevant was the finding that the weighting models differed in their relative impact on male- and female-dominated jobs. The article discusses generalizability issues and recommendations to practitioners concerning weighting methodology in job evaluation projects.