Simulations are replicas, of varying exactness, of the tasks, knowledge, skills, and abilities required in actual work behavior. This chapter reviews research on the more traditional high-fidelity simulations (i.e., assessment centers and work samples) and contrasts it with the growing body of research on low-fidelity simulations (i.e., situational judgment tests). Both types of simulations are compared in terms of the following five statements: “The use of simulations enables organizations to make predictions about a broader array of KSAOs,” “We don't know exactly what simulations measure,” “When organizations use simulations, the adverse impact of their selection system will be reduced,” “Simulations are less fakable than personality inventories,” and “Applicants like simulations.” Generally, research results show that these statements apply to both high-fidelity and low-fidelity simulations. Future research should focus on comparative evaluations of simulations, the effects of structuring them, and their cross-cultural transportability.
The inflow of immigrants challenges organizations to consider alternative selection procedures that reduce potential minority (immigrants)-majority (natives) differences, while maintaining valid predictions of performance. To deal with this challenge, this paper proposes response format as a practically and theoretically relevant factor for situational judgment tests (SJTs). We examine a range of response format categories (from traditional multiple-choice formats to more innovative constructed response formats) and conceptually link these response formats to mechanisms underlying minority-majority differences. Two field experiments are conducted with SJTs. Study 1 (274 job seekers) contrasts minority-majority differences in scores on a multiple-choice versus a written constructed response format. Written constructed responses produce much smaller minority-majority differences (d = .28 vs. d = .92). In Study 2 (269 incumbents), scores on a written constructed versus an audiovisual constructed format are compared. The audiovisual format further reduces minority-majority differences (d = .09 vs. d = .41), with validities remaining the same. Results are suggestive of cognitive load as a contributor to the reduction in minority-majority differences, as are rater effects: Scores of raters evaluating transcribed audiovisual responses, which anonymized test takers, produce larger differences. In sum, altering response modality via more realistic response formats (i.e., the audiovisual constructed format) leads to significant reductions in minority-majority differences without impairing criterion-related validity. Implications for selection theory and practice are discussed.
In the context of the diversity–validity dilemma in personnel selection, the present field study compared ethnic subgroup differences on an innovative constructed response multimedia test to those on other commonly used selection instruments. Applicants (N = 245, 27% ethnic minorities) for entry‐level police jobs completed a constructed response multimedia test, cognitive ability test, language proficiency test, personality inventory, structured interview, and role play. Results demonstrated smaller ethnic subgroup differences on constructed response multimedia test scores than on the other instruments. Constructed response multimedia test scores were related to the selection decision, and no evidence of predictive bias was found. Subgroup differences were also examined at the dimensional level, with cognitively loaded dimension scores displaying larger differences.
The diversity-validity dilemma has been a dominant theme in personnel selection research and practice. As some of the most valid selection instruments display large ethnic performance differences, scientists attempt to develop strategies that reduce ethnic subgroup differences in selection performance, while simultaneously maintaining criterion-related validity. This paper provides an evidence-based overview of the effectiveness of six strategies for dealing with the diversity-validity dilemma: (1) using 'alternative' cognitive ability measures, (2) employing simulations, (3) using statistical approaches to combine predictor and criterion measures, (4) reducing criterion-irrelevant predictor variance, (5) fostering positive candidate reactions, and (6) providing coaching and opportunity for practice to candidates. Three of these strategies (i.e., employing simulation-based assessments, developing alternative cognitive ability measures, and using statistical procedures) are identified as holding the most promise to alleviate the dilemma. Potential areas in need of future research are discussed.