2021
DOI: 10.1111/1748-8583.12393

Preferring the devil you know: Potential applicant reactions to artificial intelligence evaluation of interviews

Abstract: In search of greater resource savings and efficiencies, companies are turning to new technologies in the interview process, such as artificial intelligence evaluation (AIE). However, little is known about candidate reactions to this new tool. We identified outstanding questions regarding reactions to AIE arising from justice and signaling theories and conducted interviews with 33 professionals to understand their perceptions of AIE use in selection. Participants raised issues related to all four types of justi…

Cited by 50 publications (50 citation statements); references 51 publications.
“…Indeed, candidates perceive that there is less opportunity for behavioral control when assessments are automated compared to when they are judged by humans, meaning that they feel they are given less chance to perform and manipulate the raters to influence them toward a positive judgment (Lee, 2018; Kaibel et al., 2019). Candidates also perceive that there is less social presence when recruitment processes are automated (Kaibel et al., 2019; Mirowska and Mesnet, 2021), a view that is also shared by HR professionals (Fritts and Cabrera, 2021; Li et al., 2021). However, research into fairness perceptions of algorithmic recruitment tools is still in its infancy and there is scope for further investigations, which could help to inform how fairness perceptions might be improved, particularly for groups that are already underrepresented in organizations and application processes.…”
Section: Discussion
confidence: 99%
“…This suggests that algorithmic judgments are perceived as unfair because they are less able to reflect human values and replicate interpersonal exchanges, since an algorithm cannot empathize with candidates the way humans can. Indeed, despite acknowledging that algorithmic recruitment tools are more objective than human ratings, and that human ratings can be biased, participants still report perceiving algorithmic recruitment tools as less fair due to the lack of human connection and interaction (Mirowska and Mesnet, 2021). This could explain why algorithmic tools used earlier in the funnel, where there is typically less human interaction, are seen as equally fair to human ratings, while algorithmic tools used later in the funnel, such as during the interview stage, are viewed as less fair than human ratings (Köchling et al., 2022), since there are differences in the level of human connection expected.…”
Section: Algorithmic Recruitment and Procedural Fairness
confidence: 99%
“…Our results generally align well with previous research in the field, as the highest assessment quality is associated with reduced AI complexity and intangibility, and with high reliability. Candidates seem to be skeptical towards complex AI, which they may not understand (see, for example, Mirowska and Mesnet, 2021). Of note, system reliability is only relevant in association with the other dimensions of AI, indicating that merely mentioning that the AI is reliable does not improve assessment quality perceptions.…”
Section: Discussion
confidence: 99%
“…Moreover, such a personal exchange may provide less fruitful information to the applicant, and perceptions of job-relatedness are lower compared with face-to-face interviews (Sears et al., 2013). Technological interviews brought about less favorable evaluations of the interviewer (Sears et al., 2013), and applicants feel greater uncertainty during the interview (Mirowska and Mesnet, 2021). In this research, we want to investigate the role of AI in such negative reactions in more detail.…”
Section: Introduction
confidence: 99%