Algorithmic decision-making is becoming increasingly common as a new source of advice in HR recruitment and HR development. While firms implement algorithmic decision-making to save costs and increase efficiency and objectivity, it may also lead to the unfair treatment of certain groups of people, implicit discrimination, and perceived unfairness. The threats of unfairness and (implicit) discrimination posed by algorithmic decision-making remain largely unexplored in the human resource management context. Our goal is to clarify the current state of research related to HR recruitment and HR development, identify research gaps, and provide crucial directions for future research. Based on a systematic review of 36 journal articles published from 2014 to 2020, we present applications of algorithmic decision-making and evaluate the possible pitfalls in these two essential HR functions. In doing so, we inform researchers and practitioners, offer important theoretical and practical implications, and suggest fruitful avenues for future research.
Companies increasingly use artificial intelligence (AI) and algorithmic decision-making (ADM) in their recruitment and selection processes for cost and efficiency reasons. However, there are concerns about applicants' affective responses to AI systems in recruitment, and knowledge about affective responses to the selection process is still limited, especially when AI supports different stages of the selection process (i.e., preselection, telephone interview, and video interview). Drawing on the affective response model, we propose that affective responses (i.e., opportunity to perform, emotional creepiness) mediate the relationship between an increasingly AI-based selection process and organizational attractiveness. Using a scenario-based between-subject design with German employees (N = 160), we investigate whether and how AI-support during a complete recruitment process diminishes the opportunity to perform and increases emotional creepiness during the process. Moreover, we examine the influence of opportunity to perform and emotional creepiness on organizational attractiveness. We found that AI-support at later stages of the selection process (i.e., telephone and video interview) decreased the opportunity to perform and increased emotional creepiness. In turn, opportunity to perform and emotional creepiness mediated the effect of AI-support in telephone/video interviews on organizational attractiveness. However, we did not find negative affective responses to AI-support at an earlier stage of the selection process (i.e., during preselection). As we offer evidence of possible adverse reactions to the use of AI in selection processes, this study provides important practical and theoretical implications.
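The mediation claim in this abstract (an affective response transmitting the effect of AI-support onto organizational attractiveness) can be made concrete with a product-of-coefficients test. The sketch below is not the authors' code or data; the variable names, effect sizes, and simulated data are illustrative assumptions, and in practice the indirect effect would be tested with a bootstrapped confidence interval.

```python
# Hedged sketch of a simple mediation test: does emotional creepiness
# mediate the effect of AI-support on organizational attractiveness?
# All data is synthetic; coefficients 0.5 and -0.6 are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 160  # sample size reported in the study

ai_support = rng.integers(0, 2, n)                        # 0 = human, 1 = AI-supported
creepiness = 0.5 * ai_support + rng.normal(0, 1, n)       # path a: X -> mediator
attractiveness = -0.6 * creepiness + rng.normal(0, 1, n)  # path b: mediator -> Y

# Path a: regress the mediator on AI-support.
m_a = sm.OLS(creepiness, sm.add_constant(ai_support)).fit()
# Paths b and c': regress the outcome on AI-support and the mediator.
m_b = sm.OLS(attractiveness, sm.add_constant(np.c_[ai_support, creepiness])).fit()

a, b = m_a.params[1], m_b.params[2]
print(f"indirect effect a*b = {a * b:.3f}")  # bootstrap this for a CI in practice
```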
The study aims to identify whether algorithmic decision-making leads to unfair (i.e., unequal) treatment of certain protected groups in the recruitment context. Firms increasingly implement algorithmic decision-making to save costs and increase efficiency. Moreover, algorithmic decision-making is often considered fairer than human decisions, which are prone to social prejudices. Recent publications, however, imply that the fairness of algorithmic decision-making is not necessarily given. To investigate this further, highly accurate algorithms were used to analyze a pre-existing data set of 10,000 video clips of individuals in self-presentation settings. The analysis shows that under-representation with respect to gender and ethnicity in the training data set leads to an unpredictable overestimation or underestimation of the likelihood of inviting members of these groups to a job interview. Furthermore, algorithms replicate the existing inequalities in the data set. Firms therefore have to be careful when implementing algorithmic video analysis during recruitment, as biases occur if the underlying training data set is unbalanced.
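The mechanism described here (a group under-represented in training data receiving systematically skewed predictions) can be reproduced in a few lines. The sketch below is a minimal synthetic illustration, not the authors' pipeline: the feature model, the 5% minority share, and the group-dependent measurement shift are all assumptions made for demonstration.

```python
# Minimal sketch: a screening classifier trained on an unbalanced sample
# produces skewed error and invite rates for the under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n_majority, n_minority):
    """Two groups share the same invite rule (invite if latent skill > 0),
    but the observed feature is shifted for the minority group."""
    group = np.r_[np.zeros(n_majority), np.ones(n_minority)].astype(int)
    skill = rng.normal(0, 1, len(group))
    feature = skill + 0.8 * group + rng.normal(0, 0.5, len(group))
    invite = (skill > 0).astype(int)
    return np.c_[feature], invite, group

# Train on an unbalanced sample (5% minority), evaluate on a balanced one.
X_tr, y_tr, _ = make_data(9500, 500)
X_te, y_te, g_te = make_data(5000, 5000)

pred = LogisticRegression().fit(X_tr, y_tr).predict(X_te)

for g in (0, 1):
    mask = g_te == g
    print(f"group {g}: accuracy={(pred[mask] == y_te[mask]).mean():.2f}, "
          f"predicted invite rate={pred[mask].mean():.2f}")
```

Because the decision threshold is learned almost entirely from the majority group, the minority group's shifted feature distribution translates into a systematically inflated invite rate and lower accuracy, mirroring the over/underestimation the abstract reports.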
Despite the increasing popularity of AI-supported selection tools, knowledge about what organizations can do to increase AI acceptance is still in its infancy, even though multiple studies point out that applicants react negatively to the implementation of AI-supported selection tools. Therefore, this study investigates ways to alter applicant reactions to AI-supported selection. Using a scenario-based between-subject design with participants from the working population (N = 200), we varied the information provided by the organization about the reasons for using an AI-supported selection process (no additional information vs. written information vs. video information) in comparison to a human selection process. Results show that the use of AI without information or with written information decreased perceived fairness and personableness perceptions and increased emotional creepiness. In turn, perceived fairness, personableness perceptions, and emotional creepiness mediated the association between an AI-supported selection process and both organizational attractiveness and the intention to proceed further with the selection process. Moreover, results did not differ between applicants who were provided video explanations of the benefits of AI-supported selection tools and those who participated in an actual human selection process. Important implications for research and practice are discussed.