2021
DOI: 10.1027/1866-5888/a000263

Selection Myths

Abstract: After nearly two decades of awareness of the research–practice gap in human resource management, this study updates and expands on the seminal findings of Rynes et al. (2002) specific to personnel selection. In a sample of 453 human resource (HR) practitioners in the US and Canada, we found that the research–practice gap persists. Notably, compared to the 2002 findings, HR practitioners tended to be worse at identifying personnel selection myths than was shown by Rynes et al. over 15 years ago, while…

Cited by 15 publications (15 citation statements); References 35 publications
“…Yet, decision-makers' beliefs about predictor validities often diverge from these empirical validity estimates. For example, many practitioners involved in hiring still wrongly believe that unstructured interviews are more valid than structured interviews, and they consider conscientiousness more important for predicting job performance than cognitive ability (Fisher et al., 2021). However, decision-makers differ in their beliefs about predictor validities (D. J. R. Jackson et al., 2018; Rynes et al., 2002; Sanders et al., 2008).…”
Section: Predictor Validity Beliefs (mentioning)
confidence: 99%
“…In a statistically optimal world, complete adoption of the original, non-biased model would leave no room for these human biases to creep into decisions. Yet, again, if the statistically optimal model is not implemented, the default will be the human-only decision, which would likely lead to even more adverse impact than the combination of human and algorithmic judgment. … We chose these features because they are common in many selection procedures and because decision-makers often disagree with the research in these areas (Fisher et al., 2021; Rynes et al., 2007). Our weighting of each factor follows meta-analytic findings (Schmidt & Hunter, 1998) showing that experience has lower validity than interviews, which are typically less valid than personality tests and GPA (for incumbents recently out of school; Roth et al., 1996).…”
Section: Trade-offs and Concerns With End User Modifiability (mentioning)
confidence: 99%
“…Scientists largely agree that cognitive abilities, and to a lesser extent personality, are the most relevant constructs explaining differences in academic and job performance (Kuncel et al., 2004; Sackett, Lievens, et al., 2017; Stanek & Ones, 2018). In contrast, practitioners primarily consider personality and applied social skills, rather than cognitive abilities, to be the most important constructs (Fisher et al., 2020; Ryan et al., 2015; Sackett & Walmsley, 2014). With regard to assessment instruments, scientific evidence has shown that scores on cognitive ability tests, assessment centers, work sample tests, and structured interviews are valid predictors of job performance (Huffcutt et al., 2014; Ones et al., 2010; Roth et al., 2005; Sackett, Shewach, & Keiser, 2017). However, less valid instruments, such as analyses of CVs and cover letters and unstructured interviews, remain prevalent in practice (König et al., 2010; Lievens & De Paepe, 2004; Risavy et al., 2019; Zibarras & Woods, 2010).…”
Section: The Science-Practice Gap (mentioning)
confidence: 99%
“…There are different reasons why evidence-based assessment is underutilized. These reasons include unawareness of or disbelief in research findings (Fisher et al., 2020; Highhouse, 2008), the restriction of practitioners' autonomy (Nolan & Highhouse, 2014), and the reduction in credit received from other stakeholders for decisions made (Nolan et al., 2016). Despite substantial progress in research on performance prediction and decision-making, the challenge of increasing the use of evidence-based assessment in selection has not been resolved in the last century (Ryan & Ployhart, 2014, p. 695).…”
(mentioning)
confidence: 99%