A review of criterion-related validities of personality constructs indicated that 6 constructs are useful predictors of important job-related criteria. An inventory was developed to measure the 6 constructs. In addition, 4 response validity scales were developed to measure accuracy of self-description. These scales were administered in 3 contexts: a concurrent criterion-related validity study, a faking experiment, and an applicant setting. Sample sizes were 9,188, 245, and 125, respectively. Results showed that (a) validities were in the .20s (uncorrected for unreliability or range restriction) against targeted criterion constructs, (b) respondents successfully distorted their self-descriptions when instructed to do so, (c) response validity scales were responsive to different types of distortion, (d) applicants' responses showed no evidence of distortion, and (e) validities remained stable regardless of possible distortion by respondents in either unusually positive or negative directions.
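Because the reported validities are uncorrected, it can help to see how corrections for criterion unreliability and direct range restriction are typically applied. The sketch below uses standard textbook formulas (disattenuation and Thorndike's Case II); the coefficient, reliability, and SD-ratio values are hypothetical illustrations, not figures from the study.

```python
import math

def correct_validity(r_obs, criterion_reliability, sd_ratio):
    """Disattenuate an observed validity for criterion unreliability,
    then apply Thorndike's Case II correction for direct range restriction.

    r_obs: observed (restricted, attenuated) validity coefficient
    criterion_reliability: reliability of the criterion measure
    sd_ratio: predictor SD in applicant pool / SD in selected sample (>= 1)
    """
    # Step 1: correct for unreliability in the criterion only.
    r_true = r_obs / math.sqrt(criterion_reliability)
    # Step 2: Thorndike Case II correction for direct range restriction.
    u = sd_ratio
    return (u * r_true) / math.sqrt(1 + r_true**2 * (u**2 - 1))

# Hypothetical values: r = .25, criterion reliability = .60, SD ratio = 1.3.
print(round(correct_validity(0.25, 0.60, 1.3), 3))  # -> 0.405
```

With these made-up inputs, an observed .25 rises to roughly .41, which is why the uncorrected/corrected distinction matters when comparing validities across studies.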
The total variance in any observed measure of performance can be attributed to 3 sources: (a) the correlation of the measure with the latent variable of interest (i.e., true score variance), (b) reliable but irrelevant variance due to contamination, and (c) error. A model is proposed that specifies 3, and only 3, determinants of the relevant variance: declarative knowledge, procedural knowledge and skill, and volitional choice (motivation). The 3 determinants are defined, and their implications for performance measurement are discussed. Using data from the U.S. Army Selection and Classification Project (Project A), the authors found that the model fit a simplex pattern to the criterion data matrix. The predictor-determinant correlations are also estimated. Analyses of the data with LISREL provided strong confirmation of the model.
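The decomposition and the determinant model can be stated compactly. The notation below is ours, a hedged restatement of the abstract rather than the authors' own equations:

```latex
% Partition of observed performance variance into 3 sources
\sigma^2_{\text{observed}} = \sigma^2_{\text{true}} + \sigma^2_{\text{contamination}} + \sigma^2_{\text{error}}

% Relevant (true-score) variance in a performance component (PC) is
% modeled as a function of 3, and only 3, determinants
\mathrm{PC} = f(\mathrm{DK},\, \mathrm{PKS},\, \mathrm{M})
```

Here DK is declarative knowledge, PKS is procedural knowledge and skill, and M is volitional choice (motivation).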
Organizational research and practice involving ratings are rife with what the authors term ill-structured measurement designs (ISMDs): designs in which raters and ratees are neither fully crossed nor nested. This article explores the implications of ISMDs for estimating interrater reliability. The authors first provide a mock example that illustrates potential problems that ISMDs create for common reliability estimators (e.g., Pearson correlations, intraclass correlations). Next, the authors propose an alternative reliability estimator, G(q,k), that resolves problems with traditional estimators and is equally appropriate for crossed, nested, and ill-structured designs. Using Monte Carlo simulation, the authors evaluate the accuracy of traditional reliability estimators compared with that of G(q,k) for ratings arising from ISMDs. Regardless of condition, G(q,k) yielded estimates as precise as or more precise than those of traditional estimators. The advantage of G(q,k) over the traditional estimators became more pronounced with increases in (a) the overlap between the sets of raters that rated each ratee and (b) the ratio of rater main effect variance to true score variance. Discussion focuses on implications of this work for organizational research and practice.
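To make the contrast concrete, the simulation sketch below generates ratings under an ill-structured design (each ratee scored by k raters drawn from a shared pool) and computes a G(q,k)-style coefficient from known variance components. This is illustrative only: the variance-component values are made up, and q is set by hand, whereas the published estimator derives q and the variance components from the observed rating design, which we do not reproduce here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Known variance components for the simulated ratings.
var_t, var_r, var_e = 1.0, 0.5, 0.5   # true score, rater main effect, residual
n_ratees, n_raters, k = 200, 20, 2    # k raters per ratee, drawn from a pool

true_scores = rng.normal(0, np.sqrt(var_t), n_ratees)
rater_effects = rng.normal(0, np.sqrt(var_r), n_raters)

ratings = np.empty((n_ratees, k))
for i in range(n_ratees):
    # Ill-structured: rater sets overlap across ratees but are not identical.
    raters = rng.choice(n_raters, size=k, replace=False)
    ratings[i] = (true_scores[i] + rater_effects[raters]
                  + rng.normal(0, np.sqrt(var_e), k))

# Naive estimator: Pearson correlation between the two rating "columns."
# Column membership is arbitrary in an ISMD, so this can misbehave.
naive = np.corrcoef(ratings[:, 0], ratings[:, 1])[0, 1]

def g_qk(var_t, var_r, var_e, q, k):
    """G(q,k)-style reliability of a k-rater mean: q scales how much rater
    main effect variance enters the error term (0 = fully crossed,
    1 = fully nested). Illustrative form, not the published derivation."""
    return var_t / (var_t + (q * var_r + var_e) / k)

print(f"naive column correlation:   {naive:.3f}")
print(f"G(q=1, k=2), nested-like:   {g_qk(var_t, var_r, var_e, 1.0, k):.3f}")
print(f"G(q=0, k=2), crossed-like:  {g_qk(var_t, var_r, var_e, 0.0, k):.3f}")
```

The gap between the q = 0 and q = 1 values shows why a single coefficient that interpolates between crossed and nested assumptions is useful when the true design is somewhere in between.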
Recent research suggests that multidimensional forced-choice (MFC) response formats may resist purposeful response distortion on personality assessments. It remains unclear, however, whether these formats provide the normative trait information required in selection contexts. The current research evaluated the correspondence between scores from an MFC measure and 2 Likert-type measures under honest and instructed-faking conditions. Under honest responding, scores from the MFC measure appeared to be valid indicators of normative trait standing. Under faking, the MFC measure showed less score inflation than the Likert measure at the group level of analysis. At the individual level, however, the MFC measure was as affected by faking as the Likert measure. Results suggest that the MFC format is not a viable method for controlling faking.
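The group-level versus individual-level distinction is the crux here: a format can dampen mean score inflation while still reshuffling who ranks above whom. A minimal sketch of both analyses, on made-up honest and faking scores for the same respondents (the data-generating values are hypothetical, not the study's):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100

honest = rng.normal(0.0, 1.0, n)
# Hypothetical faking condition: modest mean shift, but person-specific
# inflation that disturbs individual standings.
faking = honest * 0.6 + rng.normal(0.2, 1.0, n)

# Group level: standardized mean difference (Cohen's d, pooled SD).
pooled_sd = np.sqrt((honest.var(ddof=1) + faking.var(ddof=1)) / 2)
d = (faking.mean() - honest.mean()) / pooled_sd

# Individual level: within-person change and rank-order stability.
change = faking - honest
stability = np.corrcoef(honest, faking)[0, 1]

print(f"group-level d: {d:.2f}")
print(f"mean within-person change: {change.mean():.2f} (SD {change.std(ddof=1):.2f})")
print(f"honest-faking rank-order correlation: {stability:.2f}")
```

A small d with a weak honest-faking correlation would reproduce the pattern the abstract describes: little inflation on average, yet substantial distortion for individuals.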
This article presents a psychometric approach for extracting normative information from multidimensional forced-choice (MFC) formats while retaining the method's faking-resistant properties. The approach draws on concepts from Coombs's unfolding models and modern item response theory to build a theoretical model of the judgment process respondents use to answer MFC items; that model is then used to derive a scoring system that estimates normative trait standing.
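One way to make the scoring idea concrete: in unfolding-based pairwise preference models of this kind (e.g., the multi-unidimensional pairwise preference model associated with this line of work), the probability of preferring statement s over statement t is written in terms of each statement's endorsement probability under an ideal point model. A hedged restatement, in our notation:

```latex
P(s \succ t \mid \boldsymbol{\theta})
  = \frac{P_s(\boldsymbol{\theta})\,[1 - P_t(\boldsymbol{\theta})]}
         {P_s(\boldsymbol{\theta})\,[1 - P_t(\boldsymbol{\theta})]
          + [1 - P_s(\boldsymbol{\theta})]\,P_t(\boldsymbol{\theta})}
```

Here $P_s(\boldsymbol{\theta})$ is the probability of endorsing statement $s$ on its own given the latent trait vector $\boldsymbol{\theta}$, itself modeled with an unfolding (ideal point) IRT model; normative trait estimates are then obtained by estimating $\boldsymbol{\theta}$ from the observed preferences (e.g., via maximum likelihood or Bayesian estimation).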