2009
DOI: 10.1080/08959280903248310
The Importance of Exercise and Dimension Factors in Assessment Centers: Simultaneous Examinations of Construct-Related and Criterion-Related Validity

Abstract: This study presents a simultaneous examination of multiple evidential bases of the validity of assessment center (AC) ratings. In particular, we combine both construct-related and criterion-related validation strategies in the same sample to determine the relative importance of exercises and dimensions. We examine the underlying structure of ACs in terms of exercise and dimension factors while directly linking these factors to a work-related criterion (salary). Results from an AC (N = 753) showed that exercise…

Cited by 10 publications (20 citation statements)
References 52 publications
“…In the recent meta-analysis by Dilchert and Ones (2009), the relevance of these dimensions for job performance was supported: The best dimensional predictor of job performance was problem solving, followed by influencing others, organizing and planning, and communication skills. Given that, it should not be too surprising that these four dimensions were also predictive of a work-related criterion (salary) in a recent study by Lievens, Dilchert, and Ones (2009).…”
Section: Job Analysis Methods and Job Requirements Assessed
confidence: 96%
“…In addition, there is now relative consensus that this substantial exercise variance does not represent measurement bias but true cross-situational performance differences of participants across exercises (Lance, 2008; Lance, Hoffman, Gentry, & Baranik, 2008; Lievens, 2002; Lievens, Dilchert, & Ones, 2009). This is because AC exercises present different situational demands to participants, thereby producing variability in performance across exercises (Gibbons & Rupp, 2009; Howard, 2008; Putka & Hoffman, 2013).…”
Section: Abstract: Assessment Center; Interpersonal Dynamics; Trait A
confidence: 99%
“…The limited research into AC exercises is surprising because a vast body of research has revealed that the largest portions of variance in dimension ratings across exercises in ACs can be attributed to participant performance differences across exercises (also referred to as exercise effects; Kuncel & Sackett, 2014; Lance, Lambert, Gewin, Lievens, & Conway, 2004; Putka & Hoffman, 2013), even though some recent studies also found sizable portions of dimension variance (Hoffman, Melchers, Blair, Kleinmann, & Ladd, 2011; Monahan, Hoffman, Lance, Jackson, & Foster, 2013). In addition, there is now relative consensus that this substantial exercise variance does not represent measurement bias but true cross-situational performance differences of participants across exercises (Lance, 2008; Lance, Hoffman, Gentry, & Baranik, 2008; Lievens, 2002; Lievens, Dilchert, & Ones, 2009). This is because AC exercises present different situational demands to participants, thereby producing variability in performance across exercises (Gibbons & Rupp, 2009; Howard, 2008; Putka & Hoffman, 2013).…”
confidence: 99%
“…According to Arthur, Day, and Woehr (2008), validity must be established at the start of test construction and prior to operational use. Lievens, Dilchert, and Ones (2009) defined validity as the process of collecting evidence (AC results) to determine the meaning of such assessment ratings and the inferences based on these ratings. Validity is defined by the use and purpose of the AC and is crucial to the permissibility of inferences derived from AC measures.…”
Section: Validity Issues of Assessment Centres
confidence: 99%
“…The popularity of ACs is due to their many strengths, including that they have little adverse impact and predict a variety of performance criteria (Thornton & Rupp, 2006), with predictive validity correlations ranging from 0.28 to 0.52 (Gaugler, Rosenthal, Thornton, & Bentson, 1987; Hermelin et al., 2007). In addition, the method has been shown to have high criterion-related validity, as well as content validity (Gaugler et al., 1987; Lievens et al., 2009). However, research evidence concerning the internal structure of ACs shows much less support for the construct validity of AC dimensions (Kleinmann et al., 2011; Tett & Burnett, 2003).…”
Section: Validity Issues of Assessment Centres
confidence: 99%