2021
DOI: 10.3390/jintelligence9030046
Systematizing Audit in Algorithmic Recruitment

Abstract: Business psychologists study and assess relevant individual differences, such as intelligence and personality, in the context of work. Such studies have informed the development of artificial intelligence (AI) systems designed to measure individual differences. This has been capitalized on by companies that have developed AI-driven recruitment solutions, including aggregation of appropriate candidates (Hiretual), interviewing through a chatbot (Paradox), video interview assessment (MyInterview), and CV-analys…


Cited by 23 publications (11 citation statements)
References 37 publications
“…However, even when group differences in scores are not due to differences in ability, they do not always lead to adverse impact, especially when the analysis is based on a small sample (EEOC et al. 1978). Therefore, further validation is needed with a larger sample to more robustly determine whether the reported group differences could result in adverse impact, particularly since the transparency and fairness of the algorithms used in hiring are increasingly a point of concern (Kazim et al. 2021; Raghavan et al. 2020). Mitigating bias: While the potential for adverse impact from this assessment echoes concerns about the fairness of conventional selection assessments (Hough et al. 2001), adverse impact associated with algorithmic recruitment processes can be mitigated by removing the items associated with group differences and updating the algorithms (HireVue 2019; Pymetrics 2021), unlike with traditional assessments that use a standard scoring key. Further research exploring the potential for mitigating group differences in the algorithms used by this assessment is needed, particularly since there is evidence of measurement bias in the questionnaire-based measure used to construct and validate the algorithms.…”
Section: Discussion
confidence: 99%
“…The literature on EBA contains few case studies: Buolamwini and Gebru [24] assessed the efficacy of external audits to address biases in facial recognition systems; Mahajan et al. [25] outlined a procedure to audit AI systems that replicate cognitive tasks in radiology workflows; and Kazim et al. [26] applied a systematic audit to algorithmic recruitment systems. However, there is still little understanding of how organisations implement EBA and what challenges they face in the process.…”
Section: Introduction
confidence: 99%
“…The extant literature on AI audits emphasises the importance of auditing practices that can take such theoretical foundations into account by systematically assessing not only the technical dimensions of AI systems (e.g., predictive accuracy and explainability) but also the non-technical ones (e.g., underpinning design logics and principles) [14,15]. This body of work therefore recognises the importance of proactive auditing of AI systems and their design logics as proposed by this paper.…”
Section: Unravelling AI Design Logics
confidence: 91%