Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
DOI: 10.1145/3442188.3445920

One Label, One Billion Faces

Abstract: Computer vision is widely deployed, has highly visible, society-altering applications, and has documented problems with bias and representation. Datasets are critical for benchmarking progress in fair computer vision, and often employ broad racial categories as population groups for measuring group fairness. Similarly, diversity is often measured in computer vision datasets by ascribing and counting categorical race labels. However, racial categories are ill-defined, unstable temporally and geographically, and have…
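In practice, the measurement the abstract describes reduces to tallying ascribed labels. The following is a minimal sketch of such a diversity audit; the label set and annotations are hypothetical, chosen only to illustrate the counting step:

from collections import Counter

# Hypothetical per-image annotations; "race_label" is an ascribed
# categorical label of the kind the abstract critiques.
annotations = [
    {"image_id": "img_001", "race_label": "Asian"},
    {"image_id": "img_002", "race_label": "Black"},
    {"image_id": "img_003", "race_label": "White"},
    {"image_id": "img_004", "race_label": "Black"},
]

counts = Counter(a["race_label"] for a in annotations)
total = sum(counts.values())
for label, n in counts.most_common():
    # Reported "diversity" is simply each ascribed category's share.
    print(f"{label}: {n} ({n / total:.1%})")

Everything downstream of this tally inherits whatever instability the label set has, which is the paper's central concern.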

Cited by 29 publications (5 citation statements) | References 47 publications
“…These services, generally available 'off-the-shelf' to anyone, aim to determine an individual's facial characteristics, including physical or demographic traits, based on an image of their face. However, human faces are not a homogeneous group [75]; the set of images used to train and evaluate the underlying model will have a significant influence over the model's behaviour and reported performance. Without knowing specifically how and where their customers will use their services, the AI API providers are at constant risk of failing to envision, let alone account for, all the various contexts their models might encounter.…”
Section: Universality (mentioning)
confidence: 99%
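The disparity this passage points to is typically surfaced by disaggregating an evaluation set by group. Below is a minimal sketch of that practice; predict stands in for any off-the-shelf face-attribute service, and the "group" field is an ascribed demographic label, both hypothetical:

from collections import defaultdict

def per_group_accuracy(examples, predict):
    """Accuracy of `predict`, broken out by each example's group label."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex in examples:
        group = ex["group"]  # ascribed demographic label
        total[group] += 1
        if predict(ex["image"]) == ex["label"]:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

A gap between groups in this table is only as meaningful as the grouping itself, which is why the composition of the training and evaluation sets matters so much.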
“…Scientists may choose a given racial classification for a variety of reasons, including widespread acceptance, the ability to facilitate comparisons across studies, and stability [40]. Inconsistencies in racial categories have been noted in many disciplines including survey methods [36], public health [18,26], and computer vision [19,39]. Although differences in racial classification can affect research conclusions [36], researchers often fail to explain or justify their operationalizations of race [23,39].…”
Section: Racial Classification in Scientific Research (mentioning)
confidence: 99%
“…Databases used for training facial recognition algorithms contain assumptions about the static and apolitical nature of these categories, but rarely explain how their categories were constructed (Scheuerman et al., 2020). These databases also often use labels generated by outsourced human labor, such as Amazon Mechanical Turk, which rely on human assumptions drawn from visual images alone and designate only a small number of racial categories (Khan & Fu, 2021). The policies used when creating datasets therefore have lasting impacts on the categories to which we have access, the unfairness we can measure, and the policy changes we recommend based on model outcomes (Kasy & Abebe, 2021).…”
Section: Parallels to Other Fields and Related Work (mentioning)
confidence: 99%
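The labeling policy described in the passage above can be made concrete with a small sketch: crowdworkers choose from a fixed, narrow category set, and a majority vote becomes the dataset's "ground truth". The category set and votes here are hypothetical:

from collections import Counter

# The fixed, narrow label set baked into the annotation interface.
CATEGORIES = {"Asian", "Black", "Indian", "White"}

def aggregate(votes):
    """Majority vote over crowdworker labels; out-of-set votes are dropped."""
    valid = [v for v in votes if v in CATEGORIES]
    if not valid:
        return None
    label, _ = Counter(valid).most_common(1)[0]
    return label

print(aggregate(["Black", "Black", "Asian"]))  # -> Black

Whatever identities the fixed set cannot express simply never enter the dataset, which is how annotation policy constrains the unfairness that can later be measured.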