2020
DOI: 10.48550/arxiv.2005.00808
Preprint

Dimensions of Diversity in Human Perceptions of Algorithmic Fairness

Abstract: Algorithms are increasingly involved in making decisions that affect human lives. Prior work has explored how people believe algorithmic decisions should be made, but there is little understanding of which individual factors relate to variance in these beliefs across people. As an increasing emphasis is put on oversight boards and regulatory bodies, it is important to understand the biases that may affect human judgements about the fairness of algorithms. Building on factors found in moral foundations theory a…



Cited by 9 publications (9 citation statements: 0 supporting, 9 mentioning, 0 contrasting)
References 14 publications
“…For instance, Lee et al studied people's perception of trust, fairness, and justice in the context of algorithmic decision-making [56,57] and proposed how to embed these views into a policymaking framework [58]. Other scholars explored people's perceptions of procedural [41] and distributive [84,90] aspects of algorithmic fairness and studied how they relate to individual differences [42,76,108]. Nonetheless, little attention is paid to the public attribution of (moral) responsibility to stakeholders (e.g., [43,56,81]), particularly the prospect of responsibility ascription to the AI system per se.…”
Section: Responsibility, Fairness, Trust in HCI Literature (mentioning)
confidence: 99%
“…Party membership is one of the most important predictor variables for political attitudes and behaviors [55], and, increasingly, attitudes and behaviors that are not directly connected to political processes. Partisanship is among the strongest predictors of attitudes toward topics ranging from public health-related attitudes and behaviors during the early stages of the COVID-19 pandemic [47,60] to perceptions of fairness in algorithmic decision-making [59]. People's behaviors are also known to cycle in tandem with presidential terms at a partisan level [29]; for instance, gun sales increase during Democratic presidential terms [41] and donations to women's health and progressive law organizations increase during Republican presidential terms [27,153].…”
Section: Discussion (mentioning)
confidence: 99%
“…Known relationships: From prior work (e.g., [4,10,19-21,25]), we know that explanations affect people's procedural fairness perceptions (i.e., whether people think that the underlying AI's decision-making procedures are fair). Especially the revelation of sensitive features (e.g., gender or race) being used in the process appears to have significant effects [21,25,38].…”
Section: Hypothesis (mentioning)
confidence: 99%
“…We further know that there are several human-specific predictors of fairness perceptions [10,37], which we subsume under Personal Fairness Notion. This may include, e.g., individuals' stance towards affirmative action [22], but may also vary across demographics [20,31]. Finally, by distributive fairness we mean the magnitude of disparities in error rate distributions across demographic groups (e.g., males and females) [5].…”
Section: Hypothesis (mentioning)
confidence: 99%
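
To make the distributive-fairness notion quoted above concrete (the magnitude of disparities in error rates across demographic groups), here is a minimal Python sketch. The data, group labels, and the error_rate_gap helper are hypothetical illustrations, not taken from the cited papers.

from collections import defaultdict

def error_rate_gap(records):
    # records: iterable of (group, y_true, y_pred) tuples.
    # Returns (largest gap in error rate between any two groups, per-group rates).
    errors, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        errors[group] += int(y_true != y_pred)
    rates = {g: errors[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical predictions for two demographic groups.
data = [
    ("male", 1, 1), ("male", 0, 0), ("male", 1, 0), ("male", 0, 0),
    ("female", 1, 0), ("female", 0, 1), ("female", 1, 1), ("female", 0, 0),
]
gap, rates = error_rate_gap(data)
print(rates)  # {'male': 0.25, 'female': 0.5}
print(gap)    # 0.25; a larger gap indicates larger distributive unfairness

In this reading, a perfectly distributively fair classifier would have a gap of zero, i.e., identical error rates across groups; the quoted work treats the size of this gap as one input to people's fairness perceptions.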