2013
DOI: 10.1111/ijsa.12051
Subject Matter Expert Judgments Regarding the Relative Importance of Competencies Are Not Useful for Choosing the Test Batteries that Best Predict Performance

Abstract: Several recent articles have suggested that assessments of the relative importance of different abilities or competencies to a job have little bearing on the criterion‐related validity of the selection tests that measure those abilities. We hypothesize that selection test batteries chosen to maximize the judged importance of knowledge, skills, and abilities will not predict performance better than batteries of tests chosen at random. The results in two independent samples consistently show that the validity …
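
The comparison the abstract sets up can be illustrated with a small simulation. The sketch below is a minimal, hypothetical example on synthetic data: none of the test scores, importance ratings, or figures come from the article's samples. It computes criterion-related validity (the correlation of a unit-weighted composite with a performance criterion) for a battery selected on judged importance and for same-sized batteries chosen at random.

```python
# Illustrative sketch on synthetic data; nothing here reproduces the study's samples.
import numpy as np

rng = np.random.default_rng(0)
n_applicants, n_tests, battery_size = 500, 10, 4

# Synthetic scores on 10 ability/competency tests, plus a performance criterion
# that (by construction) loads modestly on every test.
tests = rng.standard_normal((n_applicants, n_tests))
performance = tests @ rng.uniform(0.1, 0.3, n_tests) + rng.standard_normal(n_applicants)

def validity(battery):
    """Criterion-related validity: correlation of a unit-weighted composite with the criterion."""
    composite = tests[:, battery].sum(axis=1)
    return np.corrcoef(composite, performance)[0, 1]

# A battery picked to maximize hypothetical SME importance ratings ...
sme_importance = rng.uniform(1, 5, n_tests)              # stand-in for SME judgments
sme_battery = np.argsort(sme_importance)[-battery_size:]

# ... versus batteries of the same size chosen at random.
random_validities = [validity(rng.choice(n_tests, size=battery_size, replace=False))
                     for _ in range(1000)]

print(f"SME-chosen battery validity: {validity(sme_battery):.3f}")
print(f"Mean random-battery validity: {np.mean(random_validities):.3f}")
```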

Cited by 8 publications (7 citation statements)
References: 34 publications
“…Not only can there be inconsistency among SMEs, but several studies have reported little to no differences in the accuracy of job analysis ratings between high and low performers (Conley & Sackett, 1987) and between job incumbents and college students (Smith & Hakel, 1979). Moreover, past research has found conflicting evidence for the ability of SMEs to identify the relative validity of traits or personal characteristics based on specific job requirements (e.g., Murphy, Deckert, Kinney, & Kung, 2013; Weekley, Labrador, Campion, & Frye, 2019). If there is uncertainty regarding whether SMEs can provide valid ratings of job performance or job requirements, how can we be sure that they are uniquely able to provide valid judgments about SJT item content?…”
Section: Do Subject Matter Experts Have Expertise? (citation type: mentioning, confidence: 99%)
“…Clearly, the labels used to represent content and content validity are not necessarily a reliable indicator of what is being measured (Murphy, 2009a, 2009b), nor are the labeled constructs valid indicators of the source of prediction of performance (Murphy, Deckert, Kinney, & Kung, 2013). When developing a new construct, few researchers look for a general factor (Drasgow, Nye, Carretta, & Ree, 2010; Kyllonen, 1993; Stauffer, Ree, & Carretta, 1996).…”
Section: Evidence Supporting the Prevalence of DGFs in Human Characteristics (citation type: mentioning, confidence: 99%)
“…An example of a simple decision rule would be to assign equal weights to a test score, a grade, and an interview rating and to add up the resulting scores. However, weights can also be based on regression analysis of primary data, meta-analyses, or subject matter experts (Bobko et al., 2007; Dawes & Corrigan, 1974; Murphy et al., 2013). In mechanical combination, weights are used consistently across judgments.…”
Section: Improving Decision Making (citation type: mentioning, confidence: 99%)
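
As a concrete illustration of the mechanical combination described in the citation statement above, the sketch below applies a fixed weight vector to every applicant's predictor scores. The predictor values and weights are hypothetical placeholders, not figures from any of the cited studies.

```python
# Minimal sketch of mechanical combination; values and weights are hypothetical.
import numpy as np

# Standardized predictor scores for three applicants:
# columns = [test score, grade, interview rating]
applicants = np.array([
    [ 1.2,  0.4, -0.3],
    [-0.5,  1.1,  0.8],
    [ 0.3, -0.2,  1.5],
])

# Simple decision rule: unit (equal) weights, summed across predictors.
unit_weights = np.ones(3)

# Alternative: weights derived from a regression analysis, a meta-analysis,
# or SME importance judgments (illustrative values only).
derived_weights = np.array([0.50, 0.30, 0.20])

# "Mechanical" combination: the same weights are applied to every applicant,
# so candidates can be rank-ordered consistently across judgments.
print(applicants @ unit_weights)      # equal-weight composites
print(applicants @ derived_weights)   # weighted composites
```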