The authors revisited the relationship between demographic diversity variables and team performance using meta-analysis, departing from previous meta-analyses by focusing on specific demographic variables (e.g., functional background, organizational tenure) rather than broad categories (e.g., highly job related, less job related). They integrated different conceptualizations of diversity (i.e., separation, variety, disparity) into the development of their rationale and hypotheses for specific demographic diversity variable-team performance relationships. Furthermore, they contrasted diversity with the team mean on continuous demographic variables when elevated levels of a variable, rather than differences among members, were more logically related to team performance. Functional background variety diversity had a small positive relationship with general team performance as well as with team creativity and innovation; the relationship was strongest for design and product development teams. Educational background variety diversity was related to team creativity and innovation and to team performance for top management teams.
Because measures of person-organization (P-O) fit are accountable to the same psychometric and legal standards used for other employment tests when they are used for personnel decision making, the authors assessed the criterion-related validity of P-O fit as a predictor of job performance and turnover. Meta-analyses resulted in estimated true criterion-related validities of .15 (k = 36, N = 5,377) for P-O fit as a predictor of job performance and .24 (k = 8, N = 2,476) as a predictor of turnover, compared with a stronger effect of .31 (k = 109, N = 108,328) for the more commonly studied relation between P-O fit and work attitudes. In contrast to the relations between P-O fit and work attitudes, the lower 95% credibility values for the job performance and turnover relations included zero. In addition, P-O fit's relations with job performance and turnover were partially mediated by work attitudes. Potential concerns pertaining to the use of P-O fit in employment decision making are discussed in light of these results.
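To clarify the credibility-interval result above: a minimal sketch, assuming the standard Hunter-Schmidt psychometric meta-analysis procedure (the abstract does not report the computational details), is that the lower bound of the 95% credibility interval around an estimated true validity $\hat{\rho}$ is

$$\hat{\rho}_{\text{lower}} = \hat{\rho} - 1.96\,SD_{\rho},$$

where $SD_{\rho}$ is the estimated standard deviation of true validities across studies. With $\hat{\rho} = .15$ for job performance, any $SD_{\rho}$ greater than roughly $.15 / 1.96 \approx .077$ is enough for the interval to include zero, which is the pattern the authors report for the job performance and turnover relations.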
The authors highlight the importance of distinguishing between constructs and methods when comparing predictors. They note that comparing predictor constructs with predictor methods in comparative evaluations of predictors results in outcomes that are theoretically and conceptually uninterpretable and thus potentially misleading. The theoretical and practical implications of the distinction between predictor constructs and predictor methods are discussed, with three important streams of personnel psychology research used to frame this discussion. Researchers, editors, reviewers, educators, and consumers of research are urged to carefully consider the extent to which the construct-method distinction is made and maintained in their own research and that of others, especially when predictors are being compared. It is hoped that this discussion will reorient researchers and practitioners toward a more construct-oriented approach that is aligned with a scientific emphasis in personnel selection research and practice.
The after-action review (AAR; also known as the after-event review or debriefing) is an approach to training based on a review of trainees' performance on recently completed tasks or performance events. Used by the military for decades, AARs have seen dramatically increased use in nonmilitary organizations in recent years. Despite the prevalence of AARs, empirical research investigating their effectiveness has been limited. This study investigated the comparative effectiveness of objective AARs (reviews based on an objective recording and playback of trainees' recent performance) and subjective AARs (reviews based on a subjective, memory-based recall of trainees' recent performance). One hundred eighty-eight individuals, participating in 47 4-person teams, were assigned to 1 of 3 AAR conditions and practiced and were tested on a cognitively complex performance task. Although there were no significant differences between objective and subjective AAR teams across the 5 training outcomes, AAR teams had higher levels of team performance, team efficacy, openness of communication, and cohesion than did non-AAR teams; the groups did not differ in team declarative knowledge. Our results suggest that AARs are effective at enhancing training outcomes. Furthermore, AARs may not depend on objective reviews and therefore may be a viable training intervention when objective reviews are not feasible.
The use of unproctored internet-based testing (UIT) for employee selection is widespread. Although this mode of testing has advantages over onsite testing, researchers and practitioners continue to be concerned about potential malfeasance (e.g., cheating and response distortion) under high-stakes conditions. Therefore, the primary objective of the present study was to investigate the magnitude and extent of high- and low-stakes retest effects on the scores of a UIT speeded cognitive ability test and two UIT personality measures. These data permitted inferences about the magnitude and extent of malfeasant responding. The study objectives were accomplished through two within-subjects studies (Study 1 N = 296; Study 2 N = 318) in which test takers first completed the tests as job applicants (high-stakes) or incumbents (low-stakes) and then as research participants (low-stakes). For the speeded cognitive ability measure, the pattern of test score differences was more consonant with a psychometric practice effect than with a malfeasance explanation, likely because of the speeded nature of the test. For the UIT personality measures, the pattern of higher high-stakes scores relative to low-stakes scores was similar to that reported for proctored tests in the extant literature. Thus, our results indicate that UIT administration does not pose a unique threat to personality measures: high-stakes score elevation was no greater than that observed for proctored tests.