2020
DOI: 10.48550/arxiv.2012.00423
Preprint

Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring

Abstract: Ranking algorithms are widely employed in online hiring platforms, including LinkedIn, TaskRabbit, and Fiverr. Since these platforms impact the livelihood of millions of people, it is important to ensure that the underlying algorithms do not adversely affect minority groups. However, prior research has demonstrated that the ranking algorithms employed by these platforms are prone to a variety of undesirable biases. To address this problem, fair ranking algorithms (e.g., Det-Greedy) which increase …
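The abstract names Det-Greedy, a member of the family of fair re-ranking algorithms that enforce per-group representation constraints at every prefix of the ranking. As a rough illustration of how such a re-ranker works, the sketch below greedily fills each position while keeping every group's count between floor(p_g * pos) and ceil(p_g * pos) at each prefix of length pos; the function name, interface, and tie-breaking here are assumptions for illustration, not the published algorithm's exact specification.

import math
from collections import defaultdict

def fair_greedy_rerank(ranked_by_group, target_share, k):
    # Greedily build a top-k ranking that keeps each group's share of
    # positions between floor(p_g * pos) and ceil(p_g * pos) at every
    # prefix of length pos. A Det-Greedy-style sketch, not the exact
    # published algorithm.
    # ranked_by_group: dict group -> list of (candidate, score),
    #                  each list sorted by descending score.
    # target_share:    dict group -> desired proportion p_g.
    counts = defaultdict(int)  # positions given to each group so far
    ranking = []
    for pos in range(1, k + 1):
        # Groups that have fallen below their minimum quota must be served first.
        below_min = [g for g in target_share
                     if ranked_by_group[g]
                     and counts[g] < math.floor(target_share[g] * pos)]
        # Otherwise, any group still below its maximum quota is eligible.
        below_max = [g for g in target_share
                     if ranked_by_group[g]
                     and counts[g] < math.ceil(target_share[g] * pos)]
        eligible = below_min or below_max
        if not eligible:
            break
        # Among eligible groups, take the highest-scoring next candidate.
        g = max(eligible, key=lambda grp: ranked_by_group[grp][0][1])
        ranking.append(ranked_by_group[g].pop(0)[0])
        counts[g] += 1
    return ranking

For example, with target_share = {'A': 0.5, 'B': 0.5}, no prefix of the output can contain more than ceil(0.5 * pos) candidates from either group; mechanically, this is what "increasing the exposure of minority candidates" means: qualified minority candidates are pulled up into earlier, more-attended positions.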

Cited by 5 publications (6 citation statements)
References 10 publications
“…However, this gap may be especially important to consider in a research direction that often seeks algorithmic solutions to inequities stemming from multiple causes, including the actions of other platform participants. For example, much work has analyzed (statistical or taste-based) discrimination on online platforms in which, even conditional on exposure, one type of stakeholder is treated inequitably by others (see, e.g., racial discrimination by employers [46,118]). In such settings, fair-exposure-based algorithms may not uniformly or even substantially improve outcomes (we give an example in Appendix Table 2); this was recently underscored by Sühr et al. [151], who found through a user survey that such algorithms' effectiveness depends substantially on context such as job description and candidate profiles.…”
Section: Provider Utility Beyond Position-based Exposure
Citation type: mentioning
confidence: 99%
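To make the "position-based exposure" framing in this statement concrete: under the standard position-bias model used in the exposure-fairness literature (a modeling assumption here, with the usual DCG-style logarithmic discount), attention decays with rank, and a group's exposure is the sum of the discounts at the positions its members occupy. A minimal sketch:

import math

def position_exposure(rank):
    # DCG-style logarithmic position-bias discount; an assumed attention
    # model, not the one used by any particular platform.
    return 1.0 / math.log2(rank + 1)

def group_exposure(ranking, group_of):
    # Total exposure each group receives in a ranking (a list of candidates);
    # group_of maps a candidate to its group label.
    totals = {}
    for rank, cand in enumerate(ranking, start=1):
        g = group_of(cand)
        totals[g] = totals.get(g, 0.0) + position_exposure(rank)
    return totals

The statement's point is that equalizing these totals across groups is not sufficient: if decision-makers discriminate conditional on exposure, a group can receive its fair share of attention and still be selected less often.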
“…Thus, past exposure plays a huge role in determining the long-term effects of future exposure; denial of early exposure could risk the viability of small providers [114]. Though one may intuitively expect that continuous re-balancing of exposure through fairness-enhancing methods would overcome (or at least reduce) this problem, this has yet to be demonstrated in the real world, and early evidence suggests otherwise (see Sühr et al. [151]).…”
Section: Spillover Effects: Compounding Popularity, Related Items, And...
Citation type: mentioning
confidence: 99%
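The compounding-popularity concern in this statement can be illustrated with a toy feedback loop (entirely hypothetical dynamics, chosen only to show the mechanism): providers of identical quality are repeatedly re-ranked by accumulated clicks, clicks arrive in proportion to position exposure, and an early random advantage snowballs.

import math
import random

random.seed(0)
n_providers, rounds = 5, 1000
clicks = [0] * n_providers

for _ in range(rounds):
    # Re-rank providers by accumulated clicks (the feedback loop).
    order = sorted(range(n_providers), key=lambda i: -clicks[i])
    for rank, i in enumerate(order, start=1):
        # Click probability proportional to position exposure, so higher
        # ranks accumulate clicks faster and tend to keep their position.
        if random.random() < 0.5 / math.log2(rank + 1):
            clicks[i] += 1

print(clicks)  # identical providers end up with very unequal click totals

Re-balancing exposure after such a history has set in must work against the accumulated gap, which is the citation's caution about denying providers early exposure.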
“…Finally, a few studies find no difference in fairness perceptions, including Suen, Chen and Lu (2019 [174]) for decision-making during video interviews, and Ötting and Maier (2018 [175]) for perceptions of justice between human, robot and computer decision agents.…”
Section: Human Rights: Fairness, Bias and Discrimination
Citation type: mentioning
confidence: 99%
“…Automation bias might be one reason why applicants perceive decisions to be fairer when recruiters only have the option to consult an automated system, as opposed to when they can only slightly change decisions that have already been made by an automated system (Newman, Fast and Harmon, 2020 [170]). Similarly, Suen, Chen and Lu (2019 [174]) find no negative reactions from candidates to algorithmic decision-making in personnel selection and argue this might be because the algorithmic evaluation only served as a reference for the human decision-maker. Recruiters themselves have also been shown to be more satisfied with personnel selection decisions when they receive a ranking of applicants from an automated support system after having processed the applicant information themselves.…”
Section: Human In the Loop And The Right To Contest
Citation type: mentioning
confidence: 99%
“…Existing SRAs (e.g., fair machine learning), once introduced into a new social context, may render current technical interventions ineffective, inaccurate, and even dangerously misguided [178]. A recent study [191] found that while fair ranking algorithms such as Det-Greedy [85] help increase the exposure of minority candidates, their effectiveness is limited in job contexts in which employers have a preference for particular genders. How to properly integrate social context into SRAs is still an open problem.…”
Section: Open Problems and Challenges
Citation type: mentioning
confidence: 99%