Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
DOI: 10.1145/3461702.3462594
A Step Toward More Inclusive People Annotations for Fairness

Abstract: The Open Images Dataset [16] contains approximately 9 million images and is a widely accepted dataset for computer vision research. As is common practice for large datasets, the annotations are not exhaustive, with bounding boxes and attribute labels for only a subset of the classes in each image. In this paper, we present a new set of annotations on a subset of the Open Images dataset called the MIAP (More Inclusive Annotations for People) subset, containing bounding boxes and attributes for all of the peopl…


Cited by 23 publications (14 citation statements)
References 17 publications
“…Several fairness evaluation datasets have been proposed to facilitate fairness assessment by enabling testing of classification performance on images from diverse geographic locations [70] or correlation between detection performance and an income variable of the object [11]. Recent work emphasized the importance of how people images are classified or otherwise analyzed by computer vision systems from early datasets of faces with geographically diverse collection [44,53] or Buolamwini and Gebru [5]'s intersectional benchmark to the recent datasets FairFace [39], Casual Conversations [30] and More Inclusive Images for People (MIAP) [68]. These works offer curated datasets with labels obtained through clear annotation rules and with specific efforts deployed for checking annotation bias.…”
Section: Related Work
confidence: 99%
“…We describe the datasets we use in Table 1: Casual Conversations [30], OpenImages MIAP [68], and UTK Faces [79] contain images of people and are used in the indicators of harmful label association and/or the same-group similarity search. DollarStreet [11,23], is used in the geographical fairness indicator.…”
Section: Related Work
confidence: 99%