Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
DOI: 10.1145/3442188.3445896
Re-imagining Algorithmic Fairness in India and Beyond

Abstract: Conventional algorithmic fairness is West-centric, as seen in its subgroups, values, and methods. In this paper, we de-center algorithmic fairness and analyse AI power in India. Based on 36 qualitative interviews and a discourse analysis of algorithmic deployments in India, we find that several assumptions of algorithmic fairness are challenged. We find that in India, data is not always reliable due to socio-economic factors, ML makers appear to follow double standards, and AI evokes unquestioning aspiration. …

Cited by 102 publications (84 citation statements)
References 98 publications (76 reference statements)
“…Research done by Sambasivan et al. shows that typical Western takes on algorithmic fairness are not portable to the cultural values and standards of India. 30 Similar conclusions may hold true for other regions within the Global South. In regions where ethnic, tribal, or religious affiliation holds more power in society than Western notions of race, which do not apply in countries such as Nigeria or Pakistan, these demographics should be seriously considered in the development, deployment, and use of AI systems.…”
Section: Understanding the Known Harms and Impact of AI-for-Health Interventions
confidence: 55%
“…We hope these experimental results will encour… (Sambasivan et al., 2021). These factors lead to the use of a variety of metrics to evaluate biases (Section 3).…”
Section: On How Different Decoding Properties Affect Biases in Generation
confidence: 99%
“…28,81], so as to resist the "portability trap" of believing that one can simply port AI systems developed in and for one context to others [70]. This may involve developing AI systems (and disaggregated evaluations) in ways that are responsive to local values and norms, as in recent research to re-imagine the fairness of AI systems in India [67], research that has highlighted the risks of algorithmic colonization [11] and efforts toward decolonial AI [49], or, more generally, approaches for "designing for the pluriverse" [23]. However, approaches to the development of AI systems that are responsive to local conditions will likely require slow and careful work that is fundamentally at odds with the "pedal to the metal" (P7, T4) approach incentivized by business imperatives.…”
Section: Implications of Deploying AI Systems at Scale
confidence: 99%