This study investigates the degree to which subgroup (Black-White) mean differences on various assessment center exercises (e.g., in-basket, role play) may be a function of the type of exercise employed, and begins to explore why these different types of exercises result in subgroup differences. The sample consisted of 633 participants who completed a managerial assessment center that evaluated them on 14 ability dimensions across 7 different types of assessment exercises. In addition, each participant completed a cognitive ability measure. The results suggest that subgroup differences varied by type of assessment exercise, and that the subgroup difference appeared to be a function of the cognitive component of the exercise. Lastly, preliminary support is found that the validity of some of the assessment center exercises in predicting supervisor ratings of job performance is based, in part, on their cognitive component; however, evidence of incremental validity does exist.
This study investigates whether different job-relevant competencies vary in the Black-White subgroup differences they exhibit. There were 633 participants (545 Whites, 88 Blacks) who completed a managerial assessment center that evaluated 13 competency dimensions across 8 assessment exercises. Participants also completed a cognitive ability test. The results suggest that subgroup differences vary by the content domain of the competency. As predicted, significant subgroup differences emerged for a majority of the more cognitively loaded competencies (e.g., judgment), while nonsignificant differences were associated with a majority of the less cognitively loaded competencies (e.g., human relations). Furthermore, when cognitive ability was controlled, 12 of 13 competency scores demonstrated incremental validity in predicting supervisory job performance ratings. In addition, competencies with greater cognitive load tended to predict cognitive aspects of job performance more strongly than noncognitive aspects. However, competencies with less cognitive load did not differentially predict cognitive and noncognitive aspects of job performance.
Intelligence (i.e., g, general mental ability) is an individual difference that is arguably more important than ever for success in the constantly changing, ever more complex world of business (Boal, 2004; Gatewood, Field, & Barrick, 2011). Although the field of industrial-organizational (I-O) psychology initially made substantial contributions to the study of intelligence and its use in applied settings (e.g., Hunter, 1980; Schmidt & Hunter, 1981), we have done relatively little in recent times to study the nature of the intelligence construct and its measurement. Instead, we have focused predominantly on using intelligence to predict performance outcomes and on examining racial subgroup differences on intelligence test scores. Although the field of I-O psychology continues to approach intelligence at a surface level, other fields (e.g., clinical psychology, developmental and educational research, and neuropsychology) have continued to study this construct in greater depth and have consequently made more substantial progress in understanding this critical and complex construct. The purpose of this article is to note this lack of progress in I-O psychology and to challenge our field to mount new research initiatives on this critical construct.
Crisis simulations provide organizations with a powerful means of selecting and training their leaders to handle crisis situations effectively. This article provides a hands-on approach to developing high-quality crisis simulations in organizations. Several topics are discussed, including: (1) the use of a Behavioural Crisis Analysis (BCA) to define the critical tasks and the knowledge, skills, and abilities involved in effectively addressing a crisis; (2) guidelines for designing a high-fidelity crisis exercise; (3) methods for measuring the crisis-handling performance of participants during the simulation; and (4) how to use the information obtained from the simulation for selecting, coaching, and developing crisis leaders.