The authors used meta-analytic procedures to examine the relationship between specified training design and evaluation features and the effectiveness of training in organizations. Results of the meta-analysis revealed training effectiveness sample-weighted mean ds of 0.60 (k = 15, N = 936) for reaction criteria, 0.63 (k = 234, N = 15,014) for learning criteria, 0.62 (k = 122, N = 15,627) for behavioral criteria, and 0.62 (k = 26, N = 1,748) for results criteria. These results suggest a medium to large effect size for organizational training. In addition, the training method used, the skill or task characteristic trained, and the choice of evaluation criteria were related to the effectiveness of training programs. Limitations of the study along with suggestions for future research are discussed.
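The sample-weighted mean d reported above is a standard meta-analytic aggregate: each study's effect size is weighted by its sample size. A minimal sketch, using hypothetical (d, N) pairs rather than data from the study:

```python
# Hedged sketch: computing a sample-weighted mean effect size (d),
# the aggregate reported in the abstract. The (d, N) pairs below are
# hypothetical illustrations, not values from the meta-analysis.

def sample_weighted_mean_d(effects):
    """effects: list of (d, n) pairs; returns sum(n * d) / sum(n)."""
    total_n = sum(n for _, n in effects)
    return sum(d * n for d, n in effects) / total_n

studies = [(0.55, 120), (0.70, 80), (0.60, 200)]  # hypothetical (d, N) pairs
print(round(sample_weighted_mean_d(studies), 3))
```

Weighting by N gives large studies, whose effect estimates have less sampling error, proportionally more influence on the mean than small ones.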
The present investigation provides a reanalysis of the employment interview for entry-level jobs that overcomes several limitations of J. E. Hunter and R. F. Hunter's (1984) article. Using a relatively sophisticated multidimensional framework for classifying level of structure, the authors obtained results from a meta-analysis of 114 entry-level interview validity coefficients suggesting that (a) structure is a major moderator of interview validity; (b) interviews, particularly when structured, can reach levels of validity that are comparable to those of mental ability tests; and (c) although validity does increase through much of the range of structure, there is a point at which additional structure yields essentially no incremental validity. Thus, results suggested a ceiling effect for structure. Limitations and directions for future research are discussed.
We used meta-analytic procedures to investigate the criterion-related validity of assessment center dimension ratings. By focusing on dimension-level information, we were able to assess the extent to which specific constructs account for the criterion-related validity of assessment centers. From a total of 34 articles that reported dimension-level validities, we collapsed 168 assessment center dimension labels into an overriding set of 6 dimensions: (a) consideration/awareness of others, (b) communication, (c) drive, (d) influencing others, (e) organizing and planning, and (f) problem solving. Based on this set of 6 dimensions, we extracted 258 independent data points. Results showed a range of estimated true criterion-related validities from .25 to .39. A regression-based composite consisting of 4 out of the 6 dimensions accounted for the criterion-related validity of assessment center ratings and explained more variance in performance (20%) than Gaugler, Rosenthal, Thornton, and Bentson (1987) were able to explain using the overall assessment center rating (14%).
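The percentages of variance explained in this abstract can be related back to validity coefficients, assuming they are squared (multiple) correlations. A minimal sketch of that conversion; the interpretation of the 20% and 14% figures as R-squared values is an assumption, not stated in the abstract:

```python
# Hedged sketch: converting between a validity coefficient (r or R) and
# the proportion of criterion variance it explains (r^2), assuming the
# variance-explained percentages in the abstract are squared correlations.
import math

def variance_explained(r):
    """Proportion of criterion variance explained by validity r."""
    return r ** 2

def validity_from_variance(r2):
    """Validity coefficient implied by a proportion of variance r2."""
    return math.sqrt(r2)

print(round(validity_from_variance(0.20), 2))  # dimension composite
print(round(validity_from_variance(0.14), 2))  # overall rating
```

Under this assumption, 20% variance explained corresponds to a multiple correlation of about .45 and 14% to about .37.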
This study examined the relationship between the similarity and accuracy of team mental models and compared the extent to which each predicted team performance. The relationship between team ability composition and team mental models was also investigated. Eighty-three dyadic teams worked on a complex skill task in a 2-week training protocol. Results indicated that although similarity and accuracy of team mental models were significantly related, accuracy was a stronger predictor of team performance. In addition, team ability was more strongly related to the accuracy than to the similarity of team mental models; accuracy partially mediated the relationship between team ability and team performance, but similarity did not.