The purpose of this study was to identify the underlying structure of the trait domain of Conscientiousness using scales drawn from 7 major personality inventories. Thirty-six scales conceptually related to Conscientiousness were administered to a large community sample (N = 737); analyses of those scales revealed a hierarchical structure with 6 factors: industriousness, order, self-control, responsibility, traditionalism, and virtue. All 6 factors demonstrated excellent convergent validity. Three of the 6 factors (industriousness, order, and self-control) showed good discriminant validity. The remaining 3 factors (responsibility, traditionalism, and virtue) appear to be interstitial constructs located equally between Conscientiousness and the remaining Big Five dimensions. In addition, the 6 underlying factors demonstrated differential predictive validity and provided incremental validity beyond the general factor of Conscientiousness when used to predict a variety of criterion variables, including work dedication, drug use, and health behaviors.
In this article, the authors developed a common strategy for identifying differential item functioning (DIF) items that can be implemented in both the mean and covariance structures method (MACS) and item response theory (IRT). They proposed examining the loading (discrimination) and intercept (location) parameters simultaneously using the likelihood ratio test with a free-baseline model and Bonferroni-corrected critical p values. They compared the relative efficacy of this approach with alternative implementations for various types and amounts of DIF, sample sizes, numbers of response categories, and amounts of impact (latent mean differences). Results indicated that the proposed strategy was considerably more effective than an alternative approach involving a constrained-baseline model. Both MACS and IRT performed similarly well in the majority of experimental conditions. As expected, MACS performed slightly worse in dichotomous conditions but better than IRT in polytomous cases where sample sizes were small. Also, contrary to popular belief, MACS performed well in conditions where DIF was simulated on item thresholds (item means), and its accuracy was not affected by impact.
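The free-baseline strategy described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes per-item model log-likelihoods are already available (from a fitted MACS or IRT model) and that each item's test constrains its loading and intercept, giving 2 degrees of freedom, for which the chi-square survival function is exactly exp(-x/2). The function name and arguments are hypothetical.

```python
import math

def lr_dif_test(ll_baseline, ll_constrained_per_item, alpha=0.05):
    """Flag DIF items via likelihood-ratio tests against a free-baseline model.

    ll_baseline: log-likelihood of the free-baseline model (studied item
        parameters free across groups).
    ll_constrained_per_item: log-likelihoods of models that constrain one
        item's loading and intercept to be equal across groups.
    """
    n_items = len(ll_constrained_per_item)
    alpha_bonf = alpha / n_items  # Bonferroni-corrected critical p value
    flagged = []
    for i, ll_c in enumerate(ll_constrained_per_item):
        g2 = 2.0 * (ll_baseline - ll_c)  # likelihood-ratio statistic
        p = math.exp(-g2 / 2.0)          # chi-square survival fn, df = 2
        if p < alpha_bonf:
            flagged.append(i)
    return flagged
```

A large drop in log-likelihood when an item is constrained yields a small p value; the Bonferroni division by the number of items controls the familywise error rate across the per-item tests.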
The present study investigated whether the assumptions of an ideal point response process, similar in spirit to Thurstone's work in the context of attitude measurement, can provide a viable alternative to the traditionally used dominance assumptions for personality item calibration and scoring. Item response theory methods were used to compare the fit of 2 ideal point and 2 dominance models with data from the 5th edition of the Sixteen Personality Factor Questionnaire (S. Conn & M. L. Rieke, 1994). The authors' results indicate that ideal point models can fit personality items as well as or better than dominance models, because they can accommodate monotonically increasing item response functions but do not require this property. Several implications of these findings for personality measurement and personnel selection are described.
The present study compared the fit of several item response theory (IRT) models to two personality assessment instruments. Data from 13,059 individuals responding to the US-English version of the Fifth Edition of the Sixteen Personality Factor Questionnaire (16PF) and 1,770 individuals responding to Goldberg's 50-item Big Five personality measure were analyzed. Various issues pertaining to the fit of the IRT models to personality data were considered. We examined two of the most popular parametric models designed for dichotomously scored items (i.e., the two- and three-parameter logistic models) and a parametric model for polytomous items (Samejima's graded response model). Also examined were Levine's nonparametric maximum likelihood formula scoring models for dichotomous and polytomous data, which were previously found to provide good fits to several cognitive ability tests (Drasgow, Levine, Tsien, Williams, & Mead, 1995). The two- and three-parameter logistic models fit some scales reasonably well but not others; the graded response model generally did not fit well. The nonparametric formula scoring models provided the best fit of the models considered. Several implications of these findings for personality measurement and personnel selection were described.
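Samejima's graded response model, one of the polytomous models compared above, defines each category's probability as the difference between adjacent cumulative (boundary) response curves. A minimal sketch, with illustrative parameter values rather than any estimated from the 16PF or Big Five data:

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Category response probabilities under Samejima's graded response model.

    theta: latent trait level; a: discrimination; thresholds: ordered
    boundary locations b_1 < ... < b_{K-1} for K response categories.
    """
    # Cumulative boundary probabilities P*(X >= k), with P*_0 = 1 and P*_K = 0.
    star = ([1.0]
            + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds]
            + [0.0])
    # Each category probability is the difference of adjacent boundaries.
    return [star[k] - star[k + 1] for k in range(len(thresholds) + 1)]
```

Because every boundary curve is a monotonically increasing logistic function, the model embodies the dominance assumption that the nonparametric formula scoring models relax.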
This article proposes an item response theory (IRT) approach to constructing and scoring multidimensional pairwise preference items. Individual statements are administered and calibrated using a unidimensional single-stimulus model. Tests are created by combining multidimensional items with a small number of unidimensional pairings needed to identify the latent metric. Trait scores are then obtained using a multidimensional Bayes modal estimation procedure based on a mathematical model called MUPP, which is illustrated and tested here using Monte Carlo simulations. Simulation results show that the MUPP approach to test construction and scoring provides accurate parameter recovery in both one- and two-dimensional simulations, even with relatively few (say, 15%) unidimensional pairings. The implications of these results for constructing and scoring fake-resistant personality items are discussed.
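The core of the pairwise-preference model above is that the probability of endorsing statement s over statement t is the probability of agreeing with s and not t, relative to both mutually exclusive outcomes. The sketch below is a simplified illustration, not the MUPP model itself: it substitutes a Gaussian-shaped ideal point kernel for the single-stimulus model used to calibrate statements, and all names and parameter values are hypothetical.

```python
import math

def endorse_prob(theta, delta, tau=1.0):
    # Simplified ideal-point kernel (a stand-in for a calibrated
    # single-stimulus model): endorsement probability peaks when the
    # trait level theta is near the statement location delta.
    return math.exp(-((theta - delta) ** 2) / (2.0 * tau ** 2))

def prefer_s_over_t(theta_s, theta_t, delta_s, delta_t):
    # Probability of preferring statement s over statement t:
    # "agree with s, disagree with t" versus the reverse outcome.
    # theta_s and theta_t may lie on different trait dimensions,
    # which is what makes the item multidimensional.
    p_s = endorse_prob(theta_s, delta_s)
    p_t = endorse_prob(theta_t, delta_t)
    num = p_s * (1.0 - p_t)
    return num / (num + (1.0 - p_s) * p_t)
```

By construction, swapping the two statements complements the preference probability, and a respondent located near s's position and far from t's strongly prefers s; these are the properties that make forced-choice pairings informative about multiple traits at once.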