Despite a substantial research literature on the influence of dimensions and exercises in assessment centers (ACs), the relative impact of these two sources of variance continues to raise uncertainties because of confounding. With confounded effects, it is not possible to establish the degree to which any one effect, including those related to exercises and dimensions, influences AC ratings. In the current study (N = 698), we used Bayesian generalizability theory to unconfound all of the possible effects contributing to variance in AC ratings. Our results show that ≤ 1.11% of the variance in AC ratings was directly attributable to behavioral dimensions, suggesting that dimension-related effects have no practical impact on the reliability of ACs. Even when taking aggregation level into consideration, effects related to general performance and exercises accounted for almost all of the reliable variance in AC ratings. The implications of these findings for recent dimension- and exercise-based perspectives on ACs are discussed.

When behavioral criteria are used to evaluate individuals in the context of selection, appraisal, and development, it is essential that these criteria are measured reliably. Unsurprisingly, therefore, the measurement properties of assessment center (AC) ratings have come under close scrutiny in the applied psychology literature. In ACs, the behavior of jobholders or candidates is sampled across several work-related situations (exercises, e.g., a role-play exercise, group discussion, or presentation) and is typically assessed by trained assessors in terms of pre-defined behavioral dimensions (e.g., communication skills, teamwork, planning and organizing).
As a result of their multifaceted measurement properties, incorporating dimensions, exercises, and assessors, ACs provide a rich source of information about the extent to which work-related behavioral criteria can be reliably measured in a job-relevant setting. Historically, researchers have questioned the extent to which behavioral dimensions are measured reliably in ACs, and have implied that researchers should utilize an exercise-oriented approach to scoring ACs.
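The kind of variance decomposition the abstract describes can be sketched with a classical (non-Bayesian) mean-squares analogue of generalizability theory for a fully crossed persons × dimensions × exercises design. Everything below is an illustrative simulation, not the article's data or method: the design sizes and "true" variance components are invented, chosen so that the dimension component is small relative to the person and person × exercise components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fully crossed AC design: persons x dimensions x exercises,
# one rating per cell. All sizes and variance components are invented
# for illustration only.
n_p, n_d, n_e = 2000, 5, 5
true = {"p": 1.0, "d": 0.02, "e": 0.4, "pd": 0.05, "pe": 0.6,
        "de": 0.03, "res": 0.8}

# Simulate ratings as a sum of independent random effects (broadcasting
# expands each effect to the full n_p x n_d x n_e array).
X = (rng.normal(0, np.sqrt(true["p"]), (n_p, 1, 1))
     + rng.normal(0, np.sqrt(true["d"]), (1, n_d, 1))
     + rng.normal(0, np.sqrt(true["e"]), (1, 1, n_e))
     + rng.normal(0, np.sqrt(true["pd"]), (n_p, n_d, 1))
     + rng.normal(0, np.sqrt(true["pe"]), (n_p, 1, n_e))
     + rng.normal(0, np.sqrt(true["de"]), (1, n_d, n_e))
     + rng.normal(0, np.sqrt(true["res"]), (n_p, n_d, n_e)))

m = X.mean()
mp = X.mean(axis=(1, 2)); md = X.mean(axis=(0, 2)); me = X.mean(axis=(0, 1))
mpd = X.mean(axis=2); mpe = X.mean(axis=1); mde = X.mean(axis=0)

# Mean squares for each effect in the random-effects crossed model.
MSp = n_d * n_e * np.sum((mp - m) ** 2) / (n_p - 1)
MSd = n_p * n_e * np.sum((md - m) ** 2) / (n_d - 1)
MSe = n_p * n_d * np.sum((me - m) ** 2) / (n_e - 1)
MSpd = n_e * np.sum((mpd - mp[:, None] - md[None, :] + m) ** 2) \
    / ((n_p - 1) * (n_d - 1))
MSpe = n_d * np.sum((mpe - mp[:, None] - me[None, :] + m) ** 2) \
    / ((n_p - 1) * (n_e - 1))
MSde = n_p * np.sum((mde - md[:, None] - me[None, :] + m) ** 2) \
    / ((n_d - 1) * (n_e - 1))
resid = (X - mpd[:, :, None] - mpe[:, None, :] - mde[None, :, :]
         + mp[:, None, None] + md[None, :, None] + me[None, None, :] - m)
MSres = np.sum(resid ** 2) / ((n_p - 1) * (n_d - 1) * (n_e - 1))

# Solve the expected-mean-square equations for the variance components.
est = {
    "res": MSres,
    "pd": (MSpd - MSres) / n_e,
    "pe": (MSpe - MSres) / n_d,
    "de": (MSde - MSres) / n_p,
    "p": (MSp - MSpd - MSpe + MSres) / (n_d * n_e),
    "d": (MSd - MSpd - MSde + MSres) / (n_p * n_e),
    "e": (MSe - MSpe - MSde + MSres) / (n_p * n_d),
}
total = sum(est.values())
pct = {k: round(100 * v / total, 2) for k, v in est.items()}
print(pct)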
Fundamental to disaster readiness planning is developing training strategies to compensate for the limited opportunities available for acquiring actual disaster response experience. With regard to communication, decision making and integrated emergency management response, the need to develop mental models capable of reconciling knowledge of multiple goals with the collective expertise of those responding represents a significant challenge for training. This paper explores the utility of the assessment centre as a developmental resource capable of achieving this goal. In addition to providing multiple, expertly evaluated simulations to facilitate the development and practice of specific skills, the ability of assessment centre methodology to promote tacit knowledge and self-efficacy renders it an appropriate vehicle for developing the mental models that underpin the core disaster management competencies of situational awareness and naturalistic and team decision making.
Although student integration theory, a sociologically based model, has been the dominant explanation for student drop-out from colleges for over 40 years, it has received only mixed empirical support in residential colleges and less in non-residential colleges. Psychological theories of active choice and behavior change offer an alternative explanation for drop-out. In research at a non-residential UK university, structural equation modeling was used in two separate studies to compare a model of student dropout based on student integration theory with a psychological model based on the theory of planned behavior (TPB). In the first study (N = 633), a model including TPB variables and two key student integration theory variables (academic integration and social integration) showed good fit to the data. Although all three TPB variables predicted intention to quit, neither of the two student integration theory variables did so. The TPB variables explained over 60% of the variance in students' intention to voluntarily withdraw from college before completing their studies, and intention to withdraw was associated with actual dropout behavior. In the second study (N = 180), using alternative measures of student integration theory factors, a model including both student integration theory and TPB variables had acceptable fit, and over 70% of the variance in intention to quit was explained; but only the TPB variables predicted intention to quit significantly. The benefits of adopting a process-based psychological explanation of student retention are discussed.

… with retention problems suffer a significant loss of income; and for countries, higher education systems which can increase social mobility and provide the specialized intellectual skills required in the 21st century are undermined by high levels of student attrition (Seidman, 2012, pp. 2-3).
The importance of retention in higher education is reflected in the wide range of locations in which research on retention has been carried out in recent years, including Australia
The inability of assessment center (AC) researchers to find admissible solutions for confirmatory factor analytic (CFA) models that include dimensions has led some to conclude that ACs do not measure dimensions at all. This study investigated whether increasing the indicator-factor ratio facilitates the achievement of convergent and admissible CFA solutions in 2 independent ACs. Results revealed that, when models specify multiple behavioral checklist items as manifest indicators of each latent dimension, all of the AC CFA models tested were identified and returned proper solutions. When armed with the ability to undertake a full set of model comparisons using model fit rather than solution convergence and admissibility as comparative criteria, we found clear evidence for modest dimension effects. These results suggest that the frequent failure to find dimensions in models of the internal structure of ACs is a methodological artifact and that one approach to increase the likelihood of reaching a proper solution is to increase the number of manifest indicators for each dimension factor. In addition, across-exercise dimension ratings and the overall assessment rating were both strongly correlated with dimension and exercise factors, indicating that regardless of how an AC is scored, exercise variance will continue to play a key role in the scoring of ACs.
I. Purpose
This document's intended purpose is to provide professional guidelines and ethical considerations for users of the assessment center method. These guidelines are designed to cover both existing and future applications. The title "assessment center" is restricted to those methods that follow these guidelines. These guidelines will provide (1) guidance to industrial/organizational/work psychologists, organizational consultants, human resource management specialists and generalists, and others who design and conduct assessment centers; (2) information to managers deciding whether or not to institute assessment center methods; (3) instruction to assessors serving on the staff of an assessment center; (4) guidance on the use of technology and on navigating multicultural contexts; and (5) information for relevant legal bodies on what are considered standard professional practices in this area.
In this study, 476 participants, divided into occupational psychology (OP)-, Chartered Institute of Personnel and Development (CIPD)-, human resource management (HRM)-qualified, and layperson subgroups, provided their perceptions of the validity, fairness, and frequency of use of employee selection methods. Results of a mixed-effects analysis of covariance revealed that respondent qualification background predicted the degree to which participant validity perceptions were aligned with research-based estimates of validity, F(3, 29.39) = 20.06, p < .001, η² = .67. Corrected pairwise comparisons suggested that the perceptions of participants with CIPD and HRM backgrounds were not significantly more aligned with research estimates of validity than were the perceptions of laypeople. OP participants' validity perceptions were significantly more aligned with research estimates than those of all other subgroups (p < .03). Evidence was also found for some between-group consistency regarding frequency-of-use perceptions, but less between-group consistency was found vis-à-vis perceptions of fairness. Implications for decision-making in employee selection are discussed.

Practitioner points: Knowledge about employee selection measures might not be effectively shared between those with CIPD- and HRM-related qualifications and those with OP-related qualifications. Laypeople and respondents with CIPD- and HRM-related qualifications were found to deviate similarly from up-to-date research findings about the validity of selection measures. Respondents with OP-related qualifications were more closely aligned with up-to-date findings about the validity of selection measures than were the other comparison groups.
Despite their popularity and capacity to predict performance, there is no clear consensus on the internal measurement characteristics of situational judgement tests (SJTs). Contemporary propositions in the literature focus on treating SJTs as methods, as measures of dimensions, or as measures of situational responses. However, empirical evidence relating to the internal structure of SJT scores is lacking. Using generalizability theory, we decomposed multiple sources of variance for three different SJTs used with different samples of job candidates (N1 = 2,320; N2 = 989; N3 = 7,934). Results consistently indicated that (1) the vast majority of reliable observed score variance reflected SJT-specific candidate main effects, analogous to a general judgement factor, and that (2) the contribution of dimensions and situations to reliable SJT variance was, in relative terms, negligible. These findings do not align neatly with any of the proposals in the contemporary literature; however, they do suggest an internal structure for SJTs.

Practitioner points: To help optimize reliable variance, overall-level aggregation should be used when scoring SJTs. The majority of reliable variance in SJTs reflects a general performance factor rather than variance pertaining to specific dimensions or situations. SJT-based developmental feedback should be delivered in terms of general SJT performance rather than performance on specific dimensions or situations. Generalizability theory, although underutilized in organizational multifaceted measurement, offers an approach to characterizing the psychometric properties of SJTs that is well suited to the complexities of SJT measurement designs.
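The recommendation to score at the overall level follows from the standard G-theory generalizability coefficient for a score averaged over dimensions and situations: person-linked variance is "signal," while person × facet and residual components shrink as the facets are averaged over. A minimal sketch of that formula follows; the variance components and facet counts here are invented for illustration, not the article's estimates.

```python
def g_coefficient(var_p, var_pd, var_ps, var_res, n_d, n_s):
    """Relative generalizability coefficient for a candidate score
    averaged over n_d dimensions and n_s situations (standard
    G-theory formula for a crossed p x d x s design)."""
    # Person x facet and residual components are divided by the number
    # of conditions averaged over, so they contribute less error as
    # aggregation increases.
    rel_error = var_pd / n_d + var_ps / n_s + var_res / (n_d * n_s)
    return var_p / (var_p + rel_error)

# Illustrative components: a large general candidate effect and a
# negligible dimension-linked effect, echoing the reported pattern.
g = g_coefficient(var_p=0.50, var_pd=0.02, var_ps=0.10, var_res=0.30,
                  n_d=4, n_s=6)
print(round(g, 3))
```

Because the dimension-linked component is small, the coefficient is driven almost entirely by the general candidate effect, and adding situations improves it more than adding dimensions does.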