2014
DOI: 10.1037/a0035788

Comparative evaluation of three situational judgment test response formats in terms of construct-related validity, subgroup differences, and susceptibility to response distortion.

Abstract: As a testing method, the efficacy of situational judgment tests (SJTs) is a function of a number of design features. One such design feature is the response format. However, despite the considerable interest in SJT design features, there is little guidance in the extant literature as to which response format is superior or the conditions under which one might be preferable to others. Using an integrity-based SJT measure administered to 31,194 job applicants, we present a comparative evaluation of 3 response formats…

Cited by 45 publications (32 citation statements). References 38 publications (81 reference statements).
“…In support of the SJTs-as-methods perspective, several meta-analyses have found that SJT scores indeed relate to general mental ability and personality variables (Arthur et al., 2014; McDaniel & Nguyen, 2001; McDaniel et al., 2007). However, under this perspective, the internal measurement structure specific to SJTs is essentially sidestepped and, as a result, there is no way of knowing what it is about SJTs that is reliable and, thus, leads to observed correlations with externally measured constructs.…”
Section: Practitioner Points
mentioning, confidence: 99%
“…As these results indicate, it took on average 8.55 more minutes to complete the personality measure on a mobile device (d = −0.49, p < .05). While it is impossible to specify the exact reasons for this difference, the pattern of results is consonant with the fact that longer response latencies are associated with activities and tasks that have higher cognitive demands (Arthur et al., 2014; Bassili & Scott, 1996; Yan & Tourangeau, 2008). The higher cognitive demands arise from structural differences (and challenges) such as increased scrolling time, interface manipulation, and comprehension time associated with using a small-screen mobile device.…”
Section: Because They Have Smaller Screens and Interfaces, Does It Ta…
mentioning, confidence: 99%
“…However, past research demonstrated that design variations in SJTs appear to influence subgroup differences. For example, knowledge-based response instructions showed slightly higher ethnic and sex group differences (Whetzel et al.), which seems to be due to the higher cognitive load of knowledge-based response instructions (Whetzel et al.; see also McDaniel et al.) (for other examples, see Arthur et al.; McDaniel, Psotka, Legree, Yost, & Weekley; Weng, Yang, Lievens, & McDaniel).…”
Section: Discussion
mentioning, confidence: 99%